The face of cyber threats has transformed dramatically over the decades. At first, they emerged as hacks, viruses and denial-of-service attacks, often hatched by young computer whiz kids chasing thrills and bragging rights. Then, criminal organizations leveraged increasingly sophisticated tools and techniques to steal private customer data, shut down business operations, access confidential or sensitive corporate information and launch ransomware schemes.

Today, artificial intelligence (AI) is empowering threat actors with exponentially greater levels of efficiency, volume, velocity and efficacy. A single AI tool can do the work of hundreds – if not thousands – of human hackers and spammers, with the ability to learn, process, adapt and strike with unprecedented speed and precision. What’s more, like the shape-shifting T-1000 assassin in Terminator 2, AI can impersonate anyone – your friends, family, co-workers and even potential romantic partners – to develop and unleash the next generation of attacks.

This evolution of AI tools and the resulting increase in AI-generated cyberattacks have put immense pressure on organizations over the past 12 months. In light of these incidents, the FBI recently issued a warning about “the escalating threat” of AI-enabled phishing/social engineering and voice/video-cloning scams.

“These AI-driven phishing attacks are characterized by their ability to craft convincing messages tailored to specific recipients (while) containing proper grammar and spelling,” according to the FBI, “increasing the likelihood of successful deception and data theft.”

Seeking to lend further insight, we conducted a comprehensive analysis of activity from January 2023 to March 2024 to get a better sense of the evolving practices and trends associated with cybercrime and AI. From that analysis, we identified the following top four forms of AI-enhanced threats:

Chatbot abuse. Underground forums have made available exposed ChatGPT login credentials, chatbots that automatically code malicious scripts and ChatGPT “jailbreaks” (prompts that bypass certain boundaries or restrictions programmed into the AI). However, interest in these purpose-built chatbots declined toward the end of 2023, as cybercriminals learned to manipulate ChatGPT prompts themselves to obtain the outcomes they wanted.

Social engineering campaigns. In exploring the possibilities of self-directed ChatGPT prompts, cybercriminals have focused intently on social engineering to trigger phishing-linked malware and business email compromise (BEC) attacks, among other exploits. AI makes it all too easy for them to conduct automatic translation, construct phishing pages, generate text for BEC schemes and create scripts for call-center operators. As the FBI noted, the increasing sophistication of the technology is making it more difficult than ever to distinguish potentially harmful spam from legitimate emails.

Deepfakes. While deepfakes have been around for years, AI is taking the concept to new avenues of deception. Before, a deepfake required extensive audio, photographic and/or video source material to set up the ruse. That’s why celebrity deepfakes grew so common: the internet contains an abundance of content about people in the news. AI, however, allows adversaries to generate convincing fakes from far less material, letting them target ordinary individuals and companies through disinformation campaigns, such as social media posts that impersonate and compromise people and businesses.

To cite one prominent example, the “Yahoo Boys” used deepfakes to carry out pseudo-romance/sextortion scams – creating fake personas, gaining victims’ trust and tricking them into sending compromising photos – and then forcing the victims to pay money to avoid having the photos released publicly. In another example, a threat actor advertised synthetic audio and video deepfake services in November 2023, claiming to be able to generate voices in any language using AI for the purposes of producing bogus advertisements, animated profile pictures, banners and promotional videos.

Know-your-customer (KYC) verification bypasses. Organizations use KYC verification to confirm customers’ identity, financial activities and risk level in order to prevent fraud. Criminals, of course, are always seeking to circumvent the verification process and are now deploying AI to do so. A threat actor using the name “John Wick” allegedly operated a service called OnlyFake, which used “neural networks” to make realistic-looking photos of identification cards. Another, going by the name “*Maroon,” advertised KYC verification bypass services that supposedly can unlock all accounts requiring facial verification, such as those that direct users to upload photos in real time from their phone camera.

If there is a common theme in our analysis, it’s that AI isn’t changing the intended end game for cybercriminals – it’s just making it much easier to get there, more swiftly and successfully. The technology allows for refinements that directly lead to more sophisticated, less detectable and more convincing threats. That’s why security teams should take heed of the developments and trends described here, as well as the FBI warning, and pursue new solutions and techniques to counter increasingly formidable AI-enabled attacks. If history has taught us anything, it’s that the first step in effectively confronting a new threat is fully understanding what it is.

 

The post The Top 4 Forms of AI-Enabled Cyber Threats appeared first on Cybersecurity Insiders.

Sizable fines imposed for data breaches in recent years indicate that regulators are increasingly determined to crack down on organizations that fail to adequately protect consumer data. Meta, for example, was fined a record $1.3 billion in 2023 for violating European Union data protection regulations. 

This regulatory pressure is also influencing consumer behavior, with nearly two in five Americans (38%) using social media less frequently due to concerns about data privacy. With this in mind, experts at Kiteworks, which unifies, tracks, controls, and secures sensitive content communications with a Private Content Network, investigated leading social media platforms to understand how they harvest personal data.

What Types of Data Does Each Social Media App Collect?

The Data Collected Across Platforms

As stated in their privacy policies, Meta, X, and TikTok all collect personally identifiable information (PII), including username, password, email, phone number, date of birth, language, location, and address book uploads. 

All three social platforms also collect payment information and usage data, which details how users interact and engage with the platforms. Meta, X, and TikTok also collect content data, including posts, messages, photos, videos, and audio data.

How is the Data Used? 

While each privacy policy outlines slightly different uses for the information they gather, the most common use case is to personalize and enhance user experience by providing customized content and ads. Additionally, all three emphasize the importance of data collection to ensure safety and security and support research. 

Meta, for example, claims to use personal data to support the research and improvement of their products, including “personalizing features, content and recommendations”. Similarly, TikTok states that collected information can be used for “research, statistical, and survey purposes.” 

As of February 9, 2024, X revoked free access to its API, which previously allowed public posts on the platform to be used freely for research purposes. This change underscores the platform’s shift towards stricter control over user data. X has, however, stated that their API can be used to “programmatically retrieve and analyze X data,” ensuring that public information remains accessible for research.

Sharing Information

Meta, X, and TikTok indicate that public posts and content are viewable by anyone, depending on users’ profile privacy settings. For users with public accounts, their information is shared with partners and third parties for services, authentication, and advertising, as well as with legal entities for compliance with laws and user protection. 

Key Differences in Data Collection

Meta collects and integrates data across multiple platforms, including Facebook, Instagram, and WhatsApp, leading to a broader range of data collection compared to X and TikTok. 

Although X and TikTok collect extensive data, their focus is more on their individual platforms, resulting in Meta having not only more data but more detailed and comprehensive data from across its platforms and user interactions. 

All platforms collect payment information, but the context for collection varies: X collects this data for ads, Meta for marketplace transactions, and TikTok for in-app purchases.

Ultimately, with the extensive amount of personal data being collected by social media platforms, it’s crucial for users to be aware of what data is being collected and how it’s being used.

Data Collection Also Poses Risks for Businesses

Businesses must also be acutely aware of how social media platforms are used. In many instances, social media users are corporate employees who frequently post at work or about work. Posts about company events, partners, or customers, and images containing desks, computer screens, facilities or other proprietary assets, put companies at potential risk of exposing sensitive information like customer data and intellectual property.

To help navigate these challenges, Patrick Spencer, spokesperson at Kiteworks, has shared best practices for employees posting on social media:

“While individual consumer behavior is important, the harvesting of social media data can also significantly impact businesses. Unauthorized or inadvertent sharing of sensitive business information on platforms known for extensive data harvesting can lead to security breaches, intellectual property theft, and reputational damage.

Additionally, the exposure or unauthorized access of personally identifiable information (PII) through these platforms can expose both employees and their employers to various cyber threats. To mitigate these risks, we strongly encourage organizations to follow these recommendations:”

1. Thoroughly check privacy policies

“The most important thing you can do to protect sensitive data is to adopt a proactive approach to safeguarding digital assets and personal information. It’s pivotal to thoroughly read privacy policies before using any online service, paying attention to key sections such as data collection, usage and sharing. You need to understand what data is collected, how it is used, and who it is shared with.”

2. Avoid sharing sensitive information

“When posting on social media, do not include photos of workspaces where customer, financial, or other sensitive content may be visible on desks or computer screens. Refrain from posting images or descriptions of proprietary equipment or research without explicit permission from your employer.”

3. Use strong security practices

“Organizations should take a ‘zero-trust’ approach to protecting their business, which includes their content. In a zero-trust security approach, no user has unfettered access to all systems. A ‘content-defined zero-trust’ approach takes this model a step further, to the content layer. Organizations can protect their sensitive content when they can see where it sits in the organization, who has access to it, and what’s being done with it. 

Similarly, employees should be cautious with the permissions they grant to apps and third-party integrations. Implement strong, unique passwords for your social media accounts and enable multi-factor authentication where possible. Regularly review and revoke access for any apps that are no longer needed to minimize potential security risks.”

4. Stay informed and educated

“Provide employee training on cybersecurity and best practices for social media use. Stay updated on the latest threats and techniques used in social engineering attacks. Regularly audit and review social media activity across the company to ensure that no sensitive information has been inadvertently shared.”

“By taking these steps and educating employees about the privacy policies of the platforms they use, businesses can mitigate risk and maintain better control over their digital footprint. Protecting personal and business data is not just an individual responsibility but a collective effort that requires vigilance and continuous education.” 

The post Social media platforms that harvest the most personal data appeared first on Cybersecurity Insiders.

In today’s digital landscape, organizations face a myriad of security threats that evolve constantly. Among these threats, human risk remains one of the most significant and challenging to mitigate. Human Risk Management (HRM) is the next step for a mature security awareness program: an approach that focuses on understanding, managing, and reducing the risks posed by human behavior within an organization. Unlike traditional compliance training programs that often rely solely on annual computer-based training, HRM is a comprehensive strategy aimed at securing the workforce by fostering a strong security culture and changing employee behavior.

What is Human Risk Management?

Human Risk Management is a holistic approach to cybersecurity that goes beyond mere awareness. It encompasses various methods and practices designed to understand the human element in security, identify vulnerabilities, and implement strategies to mitigate risks. HRM involves continuous education, regular engagement, and behavior modification techniques to ensure that employees not only understand security policies but also embody them in their daily activities.

The Importance of Human Risk Management

1. Human Error is Inevitable: Despite advancements in technology and automated security measures, human error remains a predominant cause of security breaches. Employees may fall victim to phishing attacks, use weak passwords, or inadvertently disclose sensitive information. HRM aims to minimize these errors by instilling a culture of vigilance and accountability.

2. Dynamic Threat Landscape: Cyber threats are constantly evolving. What was a secure practice yesterday may not be sufficient today. HRM ensures that employees are regularly updated on the latest threats and best practices, making the workforce adaptable to new security challenges.

3. Building a Security Culture: A strong security culture is one where security is ingrained in the organizational ethos. HRM helps in building such a culture by promoting shared values, beliefs, and practices regarding security. This cultural shift is crucial for long-term resilience against cyber threats.

4. Beyond Compliance: While compliance with regulations and standards is essential, HRM focuses on building security into the fabric of the organization. This proactive approach not only meets compliance requirements but also enhances overall security posture.

HRM vs. Traditional Compliance-Driven Programs

Traditional compliance programs often consist of periodic training sessions that employees must complete to comply with organizational policies. While these programs are necessary, they are not sufficient for mitigating human risk effectively. Here’s how HRM differs:

1. Continuous Learning and Engagement: HRM is an ongoing process that involves continuous learning and engagement. Instead of one-off training sessions, HRM includes regular workshops, phishing simulations, interactive seminars, and real-time feedback. This constant engagement helps in reinforcing good security practices and keeping security top of mind for employees.

2. Behavioral Change: The core of HRM is behavioral change. It uses psychological principles to understand why employees might engage in risky behaviors and employs strategies to modify those behaviors. Techniques such as positive reinforcement, gamification, and peer influence are used to encourage secure behavior.

3. Role-Based Training: HRM recognizes that one size does not fit all. Different employees have different roles, responsibilities, and levels of access to sensitive information. HRM tailors role-based security training and communication to address the specific needs and risks associated with each role, making the training more relevant and effective.

4. Metrics and Analytics: Effective HRM involves measuring the impact of training and engagement activities. Metrics such as phishing test results, incident reports, and employee feedback are analyzed to assess the effectiveness of the HRM program and identify areas for improvement.
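To make the metrics point concrete, here is a minimal sketch of how a team might aggregate phishing-simulation results by department. The record layout, department names, and the 10% follow-up threshold are illustrative assumptions, not the export format of any particular training platform.

```python
from collections import defaultdict

# Hypothetical per-employee results from one phishing-simulation round.
results = [
    {"dept": "Finance", "clicked": True,  "reported": False},
    {"dept": "Finance", "clicked": False, "reported": True},
    {"dept": "R&D",     "clicked": False, "reported": True},
    {"dept": "R&D",     "clicked": True,  "reported": False},
    {"dept": "Sales",   "clicked": False, "reported": False},
]

def summarize(rows):
    """Compute click and report rates per department."""
    stats = defaultdict(lambda: {"n": 0, "clicked": 0, "reported": 0})
    for row in rows:
        s = stats[row["dept"]]
        s["n"] += 1
        s["clicked"] += row["clicked"]    # booleans sum as 0/1
        s["reported"] += row["reported"]
    return {dept: (s["clicked"] / s["n"], s["reported"] / s["n"])
            for dept, s in stats.items()}

for dept, (click_rate, report_rate) in summarize(results).items():
    flag = "  <- schedule follow-up training" if click_rate > 0.10 else ""
    print(f"{dept}: click {click_rate:.0%}, report {report_rate:.0%}{flag}")
```

Tracking these rates across successive rounds, rather than in isolation, is what turns simulation data into a trend the HRM program can act on.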

Driving a Strong Security Culture

A strong security culture is the ultimate goal of Human Risk Management. This culture is characterized by:

1. Leadership Involvement: Senior leadership must champion the cause of security, setting the tone for the entire organization. Their involvement demonstrates the importance of security and encourages employees to take it seriously.

2. Open Communication: Encouraging open communication about security issues helps in creating a supportive environment where employees feel comfortable reporting suspicious activities without fear of retribution.

3. Empowerment: Empowering employees with the knowledge and tools they need to protect themselves and the organization is key. This includes not only technical training but also fostering a sense of ownership and responsibility for security.

4. Recognition and Rewards: Recognizing and rewarding employees who demonstrate good security practices can motivate others to follow suit. This positive reinforcement helps in embedding security into the organizational culture.

Conclusion

Human Risk Management is a critical component of an organization’s overall cybersecurity strategy. By going beyond just annual training and focusing on continuous engagement, behavioral change, and building a strong security culture, HRM effectively reduces the risks posed by human behavior. For senior leadership, investing in HRM is investing in the long-term security and resilience of the organization. It is about creating an environment where every employee understands their role in protecting the organization and is committed to maintaining a secure workplace.

Learn more about HRM and securing your workforce in the three-day SANS LDR433 Managing Human Risk course.

 

The post Human Risk Management: The Next Step in Mature Security Awareness Programs appeared first on Cybersecurity Insiders.

The evolving technological landscape has been transformative across most industries, but it’s arguably in the world of finance where the largest strides have been taken. Digital calculators and qualifier tools have made it quick and easy for customers to apply for mortgages and substantial loans in a matter of minutes. Elsewhere, the continued shift away from brick-and-mortar stores towards online shopping means more money is changing hands over the internet than ever before.

The net result of this is a heightened focus on online cyber security by banks and other types of financial lenders. Unsurprisingly, businesses in this sector are among the most susceptible to large monetary attacks. In 2023 alone, losses per instance of cybercrime totaled a staggering $5.9 million for financial institutions.

With it more important than ever for these organizations to do what they can to stay safe, an increasing number are taking note of what can be done to alleviate the threat of online criminal activity. In this short guide, we’ll discuss some of the best policies to implement. From educating those you work most closely with to rethinking how you react to crime, here are four of the best approaches to take.

Addressing vulnerabilities in the supply chain

Cyber criminals often choose to avoid targeting financial institutions directly, owing to the increasing amount of effort these enterprises are taking to protect themselves. As a result, they’ll look for weaknesses within a supply chain to exploit – usually in the form of a vendor or their software provider.

This is something that needs to be factored into any partnership with a third-party vendor or store. Financial businesses should evaluate the security structure of any of these websites, asking for clear guidance on exactly what measures are being taken to keep financial information safe. Adopting a “Zero Trust” network architecture, where every attempt to access your network is treated as a breach until proven otherwise, is another viable step.
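As a small, concrete starting point for that evaluation, the sketch below checks whether a vendor’s website sets a handful of widely recommended HTTP security headers. It assumes Python with the third-party requests library installed; the header list and vendor URL are illustrative, and a missing header is a conversation starter for the vendor, not a verdict on their security.

```python
import requests

# Response headers a lender might reasonably expect a vendor's site to set.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",  # enforces HTTPS on repeat visits
    "Content-Security-Policy",    # restricts where scripts can load from
    "X-Content-Type-Options",     # blocks MIME-type sniffing
    "X-Frame-Options",            # mitigates clickjacking
]

def audit_vendor(url: str) -> list[str]:
    """Return the expected security headers missing from the site's response."""
    resp = requests.get(url, timeout=10)
    # requests' header lookup is case-insensitive.
    return [h for h in EXPECTED_HEADERS if h not in resp.headers]

missing = audit_vendor("https://vendor.example.com")  # hypothetical vendor
if missing:
    print("Ask the vendor about:", ", ".join(missing))
else:
    print("All expected security headers are present.")
```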

Utilizing strong cyber security software 

Criminals most commonly target their victims’ confidential or private information. This type of attack accounts for 64% of all cyber crimes carried out against financial institutions. The solution here is to guarantee that all software and online firewalls being utilized are as up-to-date and comprehensive as possible. 

This extends beyond just the installation and use of trusted cyber security software. Measures which financial lenders can take to keep data and other sensitive information safe include: 

  • Securing all components of a network to ensure only approved users are allowed access
  • Following a strict schedule for patching any software issues
  • Regularly reviewing and deleting any unnecessary user accounts 
  • Segmenting critical network components and services 
  • Checking how comprehensive the system is with regular vulnerability scans 

It’s these preventative measures which greatly reduce the chance of falling victim to an attack. 
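As one minimal illustration of the account-review bullet above, the sketch below flags enabled accounts with no recent logins. The record layout, the 90-day threshold, and the fixed “today” date are illustrative assumptions; in practice the data would come from a directory service or identity provider export.

```python
from datetime import datetime, timedelta

# Hypothetical account records exported from a directory service.
accounts = [
    {"user": "jsmith",      "last_login": datetime(2024, 6, 30), "enabled": True},
    {"user": "old-svc",     "last_login": datetime(2023, 1, 12), "enabled": True},
    {"user": "contractor7", "last_login": datetime(2024, 2, 1),  "enabled": True},
    {"user": "mlee",        "last_login": datetime(2024, 7, 1),  "enabled": False},
]

STALE_AFTER = timedelta(days=90)   # illustrative threshold; tune to policy
today = datetime(2024, 7, 15)      # fixed date so the example is reproducible

stale = [a["user"] for a in accounts
         if a["enabled"] and today - a["last_login"] > STALE_AFTER]

print("Accounts to review for deactivation:", stale)  # ['old-svc', 'contractor7']
```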

Educating employees 

Your staff are the beating heart of your organization. Unfortunately, they’re also behind a large proportion of security incidents: it’s estimated that as many as 90% of cyber crimes are made possible by human error.

The easiest solution here is to make sure employees are educated properly through regular security awareness training. This should involve providing clear examples of the modern tricks criminals are using, as well as a detailed breakdown of common scams like baiting, phishing, whaling and scareware.

Having a robust recovery strategy 

While in an ideal world this step would never be necessary, sometimes cyber crime is unavoidable. The best way to counter becoming a victim is to have a strong policy in place that helps you immediately deal with and recover from an attack. The quicker it is enacted, the better. 

Techniques to adopt here could be to: 

  • Ensure all incidents and post-attack workflows are clearly documented and accessible
  • Carry out regular cyber recovery exercises, audits, and penetration testing
  • Maintain a good working relationship with federal and local law enforcement agencies to make communication seamless
  • Think about having a cyber insurance policy to help with the immediate financial aftermath 

By knowing how best to react to a breach, a financial lender can mitigate a lot of the more severe resultant issues.

While cyber crime isn’t going to cease any time soon, the combative approach which financial organizations are taking to dampen its impact is helping to keep trillions of dollars safe every year. Make sure to use this guide as your starting point when thinking about your own cyber threat prevention strategy. 

 

The post How do financial lenders avoid cyber threats? appeared first on Cybersecurity Insiders.

The history of artificial intelligence (AI) is a fascinating journey of innovation and discovery that spans decades. From the early days of simple machine learning algorithms to today’s advanced neural networks, AI has become an integral part of our daily lives. AI Appreciation Day, celebrated on July 16th, is a testament to this remarkable progress and a day to acknowledge the contributions of AI to society.

As we look back on the milestones of AI, we see a timeline marked by significant breakthroughs that have pushed the boundaries of what machines can do. The development of generative AI, such as ChatGPT, Bing Chat, and Google’s Bard, alongside image creators like DALL-E 2 and Midjourney, has brought AI into the spotlight, showcasing its ability to enhance human creativity and decision-making across various sectors. 

AI Appreciation Day not only celebrates these advancements but also encourages reflection on AI’s ethical and security implications. It’s a day to consider how we can continue harnessing AI’s benefits while ensuring its responsible use. As we transition into the AI Age, it’s crucial to maintain a balance between innovation and the protection of our values and privacy. 

The following expert commentary will delve deeper into these themes, offering insights from leaders in the field who have witnessed AI’s evolution firsthand. Their perspectives will shed light on the current state of AI, its potential for the future, and the challenges we must address to ensure its beneficial integration into society. 

Aviral Verma, Lead Threat Intelligence Analyst, Securin 

“We are on course towards Artificial General Intelligence, or AGI, where AI goes beyond imitation and can exhibit human-like cognitive abilities and reasoning. AI that can grasp the nuances of language, context and even emotions. I understand the side of caution, the fear of AI replacing humans. But I envision this evolution to enhance human-AI symbiotic relationships, where its true potential lies in complementing our strengths and weaknesses. Humanity is a race of creators, inventors, thinkers, and tinkerers; AGI can help us be even better at all those things and act as a powerful amplifier for human ingenuity. 

To promote safety for all users and responsible AI deployment, developers must uphold Choice, Fairness, and Transparency as three critical design pillars: 

• Choice: It’s essential that individuals have meaningful choices regarding how AI systems interact with them and affect their lives. This includes the right to opt-in or opt-out of AI-driven services, control over data collection and usage and clear explanations of how AI decisions impact them. Developers should prioritize designing AI systems that respect and empower user autonomy. 

• Fairness: AI systems must be developed and deployed in ways that ensure fairness and mitigate biases. This involves addressing biases in training data, algorithms and decision-making processes to prevent discrimination based on race, gender, age or other sensitive attributes. Fairness also encompasses designing AI systems that promote equal opportunities and outcomes for all individuals, regardless of background or circumstances. 

• Transparency: Transparency is crucial for building trust in AI systems. Developers should strive to make AI systems understandable and explainable to users, stakeholders and regulators. This includes providing clear explanations of how AI decisions are made, disclosing limitations and potential biases, and ensuring transparency in data collection, usage and sharing practices. Transparent AI systems enable scrutiny, accountability and informed decision-making by all parties involved.

The tech industry is on the edge of something truly exciting, and I am optimistic about the advancements individuals and organizations can achieve with AI. To build confidence in AI, we should focus more on Explainable AI (X-AI). By clarifying AI’s decision-making processes, X-AI can alleviate the natural skepticism people have about the “black box” nature of AI. This transparency not only builds trust but also lays a solid foundation for future advancements. With X-AI, we can move beyond the limitations of a “black box” approach and foster informed, collaborative progress for all parties involved.” 

Anthony Cammarano, CTO & VP of Engineering, Protegrity 

“On this AI Appreciation Day, we reflect on AI’s remarkable journey to an everyday consumer reality. As stewards of data security, we recognize AI’s transformative impact on our lives. We celebrate AI’s advancements and democratization, bringing powerful tools into the hands of many. Yet, as we embrace these changes, we remain vigilant about the security of the data that powers AI.

Vigilance takes understanding the nuances of data protection in an AI-driven world. It takes a commitment to securing data as it traverses the complex pipelines of AI models, ensuring that users can trust the integrity and confidentiality of their most sensitive information. Today, we appreciate AI for its potential and challenges, and we renew our commitment to innovating data security strategies that keep pace with AI’s rapid evolution.

As we look to the future, we see AI not as a distant concept but as a present reality that requires immediate attention and respect. We understand that with this great power comes great responsibility, and we are poised to meet the challenges head-on, ensuring that our data—and, by extension, our AI—is as secure as it is powerful. Let’s continue to appreciate and advance AI, but let’s do so with the foresight and security to make its benefits lasting and its risks manageable.” 

Kathryn Grayson Nanz, Senior Developer Advocate, Progress 

This AI Appreciation Day, I would encourage developers to think about trust and purposefulness. Because when we use AI technology without intention, we can actually do more harm than good. It’s incredibly exciting to see Gen AI develop so quickly and make incredible leaps forward. But it’s also a responsibility to build safely with a fast-moving technology. 

It’s easier than ever before to take advantage of AI to enhance our websites and applications, but part of doing so responsibly is being aware of the inherent risk – and doing whatever we can to mitigate it. Keep an eye on legal updates, and be ready to potentially make changes in order to comply with new regulations. Build trust with your users by sharing information freely and removing the “black box” feeling as much as possible. Make sure you’re listening to what users want and implementing AI features that enhance – rather than diminish – their experience. And establish checkpoints and reviews to ensure the human touch hasn’t been removed from the equation, entirely. 

Arti Raman (She/Her), CEO and founder, Portal26 

“Generative artificial intelligence (GenAI) offers employees and the C-suite a new arsenal of tools for productivity compared to the unreliable AI we’ve known for the past couple of decades, but as we celebrate these advancements this AI Appreciation Day, it’s less clear how organizations plan to make their AI strategies stick. They are still throwing darts into the dark, hoping to land on the perfect implementation strategy.

For those looking to make AI work for them and mitigate the risks: 

1. The technology to address burning security questions regarding GenAI has only been around for approximately six months. Many companies have fallen victim to the negative consequences of GenAI and its misuse. Now is the time to ask, ‘How can I have visibility into these large language models (LLMs)?’ 

2. The long-term ability to audit and have forensics capabilities across internal networks will be crucial for organizations wanting to ensure their AI strategies work for them, not against them.  

3. These core capabilities will ultimately drive employee education and knowing how AI tools are best utilized internally. You can’t manage what you can’t see or teach what you don’t know. Having the ability to see, collect and analyze how employees use AI, where they’re most using it and what they’re using is invaluable for long-term strategy.  

AI has marked a turning point globally, and we’re only at the beginning. As this technology evolves, so must our approach to ensuring its ethical and responsible usage.” 

Roger Brulotte, CEO, Leaseweb Canada 

“In an age where “data readiness” is crucial for organizations, the rapid adoption of AI and ML highlights the need for cloud computing services. Canada stands as a pioneer in this technological wave, with its industries using AI to drive economic growth. Montreal is quickly establishing itself as an AI hub with organizations like Scale AI and Mila – Quebec Artificial Intelligence Institute. 

Companies working with AI models need to manage extensive data sets, requiring robust and flexible solutions to manage complex tasks, training large datasets and neural network navigation. While the fundamental architecture of AI may remain constant, scaling the components up and down is essential depending on the model’s state. As the data-driven landscape keeps evolving, organizations must select data and hosting providers who can keep up with the times and adjust as needed, especially as Canada implements its spending plan to bolster AI on a national level. 

On AI Appreciation Day, we recognize that superior AI outcomes are powered by data, which is only as effective as the solutions that enable its use and safeguarding.” 

Steve Wilson, CPO, Exabeam 

“My recognition of AI Appreciation Day is part celebration, part warning for fellow AI enthusiasts in the security sector. We’ve seen AI capabilities progress dramatically, from simple bots playing chess, to self-driving cars and AI-powered drones with a spectacular potential to truly change how we live. While exciting, AI innovation is often unpredictable. Tempering our enthusiasm is the sobering reality that rapid progress — while filled with opportunity — is never without its challenges.  

The fascinating evolution of AI and self-learning algorithms has presented very different obstacles for teams in the security operations center (SOC) as they combat adversaries. Freely available AI tools are assisting threat actors in creating synthetic identity-based attacks using fraudulent images, videos, audio, and more. This deception can be indistinguishable to humans — and exponentially raise the success rate for phishing and social engineering tactics. To defend, security teams should also be armed with AI-driven solutions for predictive analytics, advanced threat detection, investigation and response (TDIR), and exceptional improvements to workflow. 

Before jumping headlong into the excitement and potential of AI, it’s our responsibility to evaluate the societal impacts. We must address ethical concerns and build robust security frameworks. AI is already revolutionizing industries, creating efficiencies and opening possibilities we never could have imagined just a few, short years ago. We’re only getting started and by proceeding with cautious optimism, we can remain vigilant to the obvious risks and avoid negative consequences, while still appreciating AI’s many benefits.” 

 Anthony Verna, SVP and GM, Cubic DTECH Mission Solutions  

“In the ever-evolving landscape of modern warfare, the role artificial intelligence (AI) plays in dictating the trajectory of military operations must be emphasized. As we continue to see the complexities of an AI-accelerated battlespace intensify, AI combined with Machine Learning (ML) and advanced data processing have become indispensable to ensure the success of critical missions. 

It’s also essential to recognize how vital next-generation tactical edge-based technologies are in providing decision advantage and how AI’s integration at the edge marks substantial advancement in military operations. The capability to process and interpret data instantaneously at the point of collection offers commanders prompt, actionable insights, facilitating rapid and well-informed decisions. 

Modern operations demand immediate and precise data-to-decision capabilities to support mission-critical decisions at the swift pace of conflict today. This edge-based approach is crucial in denied, disrupted, intermittent, and limited (DDIL) environments where traditional communication channels may be compromised or unreliable.  

As we celebrate AI Appreciation Day, let us acknowledge AI’s profound impact on our military capabilities, ensuring our forces are equipped with the most advanced technology to face the challenges of modern warfare and maintain a strategic advantage.” 

Dave Hoekstra, Product Evangelist, Calabrio  

AI Appreciation Day is a day to honor the past and present accomplishments of artificial intelligence (AI). AI is not a novel creation, but a product of decades of inquiry and invention. It improves our lives and efficiency by allowing us to interact and obtain information more quickly and easily than ever. Recent AI breakthroughs have opened up exciting opportunities in education and innovation, providing powerful tools to analyze data and act on insights like never before. 

From early chatbots to advanced voicebots, contact center customers have long interacted with AI technology. But the latest innovations in AI help companies make sense of the data customers provide, like reviews, surveys or calls. Modern models can offer human and virtual agents ongoing feedback on customer interactions to improve them. Workstation copilots can also work alongside agents and help them find answers. While a helpful human touch will always be required in the contact center, these AI enhancements are becoming more and more essential for agents to perform their jobs effectively and to create a positive customer experience. 

While the contact center is poised for significant improvements with AI, there are still important questions remaining: How do we make sure AI tools are impartial, transparent and accountable? How do we maintain a human-focused and cooperative method of customer service? These are some of the challenges we are addressing as we work towards a more advanced, AI-driven future in the contact center. 

Cris Grossmann, CEO and founder, Beekeeper  

Each year, AI Appreciation Day serves as a reminder to embrace the transformative and powerful potential AI holds for frontline industries. The adoption of AI-powered tools by frontline businesses can provide managerial visibility, which is crucial for a more connected frontline workforce. Automated features like real-time evaluation of employee sentiment allow companies to proactively address concerns and prevent employee burnout. Utilizing AI to gauge employee sentiment not only improves retention and engagement but also unlocks new levels of operational efficiency that traditional methods cannot achieve.   

No matter how many advancements in technology we make, AI will never be able to replace frontline workers. But it does have the power to enhance the experience of both frontline workers and managers through smart, people-first strategies.  

 

The post AI and Ethics: Expert Insights on the Future of Intelligent Technology appeared first on Cybersecurity Insiders.

Information security policies are a table-stakes requirement for any significantly sized organization today, but too often they are a mess of checkbox lists describing off-the-peg policies. CISOs now recognize that a security policy document is more than a reference to win deals or a pass through an audit process. A good policy document can drive best practices, sharpen attitudes, and add real value. 

The average security policy document may be worth little more than the paper it is written on, but it can be just as dry. Often, a standard template is used, and the tone tends towards techno-speak and legalese. Few people read these documents properly, and relevant information is not easily discovered. 

It can be challenging to corral all the necessary materials. Take Unit4. At one time it had multiple, disparate policies – probably not that exceptional for a large and growing organization that had accumulated various approaches to security guidelines through mergers and acquisitions. Compliance with standards like ISO27001 and others was done well, but in a disparate way, as different parts of the organization held their own certifications with limited scope, leading to a complex situation. With collaboration and goodwill along the way, however, my team has been able to simplify and unify the certification scopes to arrive at 21 succinct policy points, in plain English wherever possible. Each scope is backed by a sponsor from the global leadership team, and each is designed to allow information to be found and understood quickly.

What we have created is a logical construct for cybersecurity policy documents. These are aligned to Unit4’s global strategy and supported by efficient processes, procedures, and standards, with guidelines in place on how to achieve the desired outcomes of the policies. Other organizations may take a different approach based on their risk appetite and tolerances. Some will accept more risk than others, depending on sector, culture, and other factors: for example, we give R&D power users more rein to explore new technologies because innovation is critical to our culture and to delivering customer success.

But whatever the organization, the Information Security Framework – the core security policy and its supporting documentation – is critical. It lays out the attitude to security and the governance processes, and provides a simple map that points people to where to go if things happen or they need to ask an expert. Similarly, leaders must have tools in place to educate staff, disseminate information, and execute escalation plans for when things go awry.

Over the course of this journey, we compiled some learnings on harmonizing security policies, and I wanted to share some of the key takeaways:

More than a mandate

Of course, a policy document needs to show how an organization abides by legal demands, standards, or regulatory mandates… but that need not make it dull. A strong security policy should have underpinning guidelines, processes and procedures to link those demands to the practical ways they are being met. All documentation should be presented in a readable form and be actionable in the real world rather than being filed away like a tax return. 

Standards like ISO9001 or ISO27001 are important, but saying an organization adheres to them is not the be-all and end-all of the policy document. The Information Security Management System should add real value to the business that has implemented it.

Certification is critical, but in a well-structured and well-expressed policy document it should be an automatic outcome of the actions you are taking. A policy document should therefore be seen as a guide to desired outcomes and a means to apply good ideas. 

Divide and conquer

Start with the human factor. There shouldn’t be a one-size-fits-all approach. There will be varied levels of knowledge and areas of specialization in an organization, as well as different types of information that individuals will need to know. So, it helps if policies are split up into sections and searchable. Plain language should be used so facts are not obscured. A complex matter such as how an organization manages encryption may require some more jargon, whereas an Acceptable Usage Policy should be easy to understand. 

Policy alone won’t cut it

At Unit4 we wanted to create general policies that are then supported by processes, procedures, standards, and guidelines. Processes are interdepartmental and need to be followed to ensure that an end-to-end result is achieved – such as the Joiners, Movers and Leavers (JML) process, which covers the full lifecycle of an employee at Unit4. Departments should have procedures that they use to get things done, such as those used in development for testing code. All of this is supported by guidelines which explain how the organization needs people to act, or how to create a procedure or end-to-end process. This ensures that there is quality in everything we do, and all interactions are done in a secure way. 

Living documentation 

The documentation should be updated as appropriate, and as things change, in line with the principles of continuous improvement. For example, Unit4 has been implementing new policies and guidelines for the appropriate use of AI. After studying the security rights required by some popular tools and finding some of them lacking, we quickly updated our policies to restrict access to insecure tools. And of course, a policy should be backed up by constant monitoring processes to guarantee incidents are logged and plans are being enacted – which is done by our internal audit process. This is also why we conduct phishing tests once per quarter; fewer than five percent of our staff have been caught out, and those who are receive follow-up training.

Continue to innovate

Policy can be used to advise people as to what they can and cannot do, apply rules in tools used by employees to ensure they are enforced, and then intervene on an individual basis only where necessary. But as CISOs we should always be looking to innovate. With the technologies now becoming available, we should all be on a mission to automate wherever possible to reduce the scope of human error and to ensure optimal performance of security strategies.

By considering the security policy document as something more than a regulatory commitment or chore, you may discover that it is actually a weapon for competitive differentiation. Maybe it’s time to dig out that document and take a fresh look.

 

The post Ditch the Checkbox, Use Plain Language, Make It Real: How to Create an Information Security Policy That Works appeared first on Cybersecurity Insiders.

The human element is one of the biggest reasons why data breaches have risen in recent years. And even though most organizations have some level of security awareness training already in place, employees continue to fall prey to phishing attacks or fail to follow security best practices. Only user training can definitively undermine social engineering and phishing scams. Unfortunately, most organizations treat security training as a mandatory check-box exercise without actually focusing on what its audience needs, wants, or expects out of training. 

How Does User Experience Relate to Security Awareness Training?

User experience is the one and only true currency of security awareness training (SAT) – it is that feeling, whether positive, negative, or indifferent, that employees get after every training session. This experience can have a direct impact on engagement, learning, attentiveness, and ‘virality’ – when employees voluntarily suggest training to their colleagues. A rich user experience can help shape the security mindset, behaviors, norms and attitudes across the organization. 

What Elements Are Needed To Create A Good User Experience In Security Training?

Achieving a good user experience is not easy. It is a deliberate effort that requires meticulous attention to several components of training: active involvement from all stakeholders, plus an ongoing process whereby training is continuously refined based on user feedback. Below are some elements that organizations must work on to improve the SAT user experience:

1. Content Quality: Content always matters in training. If your content is boring and monotonous or lacks context, relevance and creativity, then this will negatively impact the user experience. In contrast, organizations can practice storytelling, using recent and relevant examples pulled from daily headlines recounting massive data breaches and multi-million-dollar extortion settlements. Using tools that make the learning process more interactive and personalized can also significantly boost the training experience. 

2. Training Frequency: If training sessions run too long, most people will get fatigued and stop paying attention. Moreover, employees tend to forget infrequent training. Therefore, it is advisable that organizations conduct training at regular intervals and shorten its duration. Employees will engage with the content with more interest and retain it better.

3. User-centric Design: Technology touchpoints such as having a user-friendly training portal, a seamless single sign-on function (that avoids making users re-enter credentials), automated workflows and reminders, a real-time view of employee training progress, and a simple browser button to report and quarantine phishing messages, can all significantly enhance the overall user experience.

4. Real-world Testing: Receiving textbook knowledge about cybersecurity threats and encountering a cybersecurity threat are two completely different things. Using phishing simulation tools, employees are subjected to mock phishing attacks in hopes they learn to identify these culprits and report them before they spread and cause harm to the organization. Simulated phishing exercises can also build confidence in handling real-world threats. 

5. Positive Reinforcement: Employees are not cybersecurity experts. Many will find it hard to grasp security jargon. Organizations must exercise empathy, not reprimand or disrespect employees for making mistakes. Some employees may need one-on-one coaching. Recognizing these needs will help make employees feel more comfortable around security and the handling of incidents while bettering their training experience.

6. Flexibility: Employees also have a day job they need to finish. Trust employees to complete training on their own time and at their own pace without overly stressing them with tight deadlines. When employees have more flexibility for learning they are apt to be more receptive to understanding new procedures and concepts.

7. Games / Incentives: Who says training can’t be fun, collaborative, rewarding and motivating? To elevate the SAT user experience, security teams, with help from HR, can run promotions and contests between departments (e.g., which team detects the most phishing emails), offer generous freebies like t-shirts or coffee vouchers, highlight good security deeds at company meetings, and run other events from the HR/marketing playbook.

8. Communication / Feedback: Communication is a core ingredient for team collaboration. Clear and consistent communication from the top down shows leadership’s commitment to cybersecurity. Moreover, communication is never a one-way street. By establishing a feedback mechanism and making continuous improvements, employees feel heard and valued, which in turn can result in positive feelings and a good user experience.

Good user experience begins with an understanding of how people think and operate. Empathy, understanding, and feedback are important cornerstones. If organizations pay heed to these best practices when building their security awareness programs then they will not only deliver better learning outcomes but also foster a healthy security culture, one resilient against phishing and social engineering threats. 

 

The post Why User Experience Matters In Security Awareness Training appeared first on Cybersecurity Insiders.

Since the SEC’s updated Cybersecurity Disclosure rulings came into force in December, unsuspecting CISOs have seen a sudden shift in the pressures they are under. Not only are they under the burden of additional cybersecurity reporting, but sharing reports that turn out to be misleading, or even inaccurate, could result in legal action. 

We need to remember that cybersecurity – and so the CISO’s role itself – is a relatively young vocation. Travel back 20 years and, for most organizations, cybersecurity was part of the wider IT function, focused purely on perimeter defenses and technical controls. Wind forward to the present and we’ve seen a constant, ongoing shift as CISOs have become a central component of business governance and risk. This means not only understanding cybersecurity practices and posture accurately, but also accurately and effectively communicating them to the board and Enterprise Risk Management (ERM) team. Understanding the SEC’s new rules will be essential to enabling this factual, data-driven approach. 

Between a rock and a hard place?

At their heart, the changes seem straightforward. Listed companies’ 8-K filings – reports announcing major events shareholders should know about – and 10-K filings – comprehensive annual reports of critical information including financial performance – need to portray cybersecurity posture accurately. However, the definition of what demands an 8-K filing has also expanded. Now “material cybersecurity incidents” need to be reported in a timely fashion, in this case within four business days of determining that an incident was “material”. 

There is a clear value to these regulations. They increase transparency, and by covering cybersecurity posture allow investors to better understand the risk level of their investments. But they also represent the growing regulatory burden on enterprises: alongside the European GDPR, California Consumer Privacy Act, and a wealth of privacy laws facing organizations looking to do business in Australia, Brazil, China, or the EU. 

This in turn puts organizations in a crossfire. The SEC will punish inaccurate, or worse, misleading reporting, while investors may balk at a report that, while accurate, presents what they see as increased risk. CISOs need to help legal counsel and others who have to make disclosure decisions ensure they don’t become a target.

Twin dilemmas

Specifically, CISOs need to deal with two additional burdens. First is the threat of legal action. If a CISO’s reports are seen to mislead investors about their susceptibility to risk, they will be in the SEC’s sights. A failure to accurately report allegedly known cybersecurity risks and vulnerabilities has already seen CISOs facing fraud charges.  

Second, the reporting burden from 8-K and 10-K reports will inevitably increase. Working closely with the ERM team will be crucial to making sure reports are accurate. The good news is that regulators recognize that, in cybersecurity, nothing is foolproof. If CISOs can prove they have the right controls in place – and, critically, that these controls are continually monitored to ensure they have been implemented correctly and are working as they should – they can insulate themselves from risk. The bad news is that this may be easier said than done, especially as the volume of reports soars.

Looking at the numbers

To measure whether the reporting burden on CISOs had increased, and whether enterprises were putting themselves at risk, we analyzed SEC cyber disclosures from the first half of 2023 and compared them to those from the first six months of the new regime.

There is no doubt that cybersecurity is a major element of 10-K filings. With listed companies now feeling obliged to include their posture in filings, mentions of NIST (National Institute of Standards and Technology) increased from 221 in 2023 to 3,025 in 2024. This represents an increase of nearly 14 times year-on-year, and the number of disclosures passing 4,000 by December wouldn’t be a surprise.
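For readers who want to reproduce this kind of count, the sketch below queries SEC EDGAR’s full-text search backend for 10-K filings matching a phrase in a date window. The efts.sec.gov endpoint, its query parameters, and the JSON layout are undocumented assumptions based on how the service behaves at the time of writing; note that it counts matching filings (a rough proxy for mentions), and the SEC asks automated clients to send a descriptive User-Agent.

```python
import requests

def count_10k_filings(phrase: str, start: str, end: str) -> int:
    """Count 10-K filings in [start, end] whose text matches the phrase."""
    resp = requests.get(
        "https://efts.sec.gov/LATEST/search-index",
        params={
            "q": f'"{phrase}"',   # exact-phrase full-text query
            "forms": "10-K",
            "dateRange": "custom",
            "startdt": start,
            "enddt": end,
        },
        headers={"User-Agent": "research-script admin@example.com"},  # placeholder contact
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["hits"]["total"]["value"]  # Elasticsearch-style layout

h1_2023 = count_10k_filings("NIST", "2023-01-01", "2023-06-30")
h1_2024 = count_10k_filings("NIST", "2024-01-01", "2024-06-30")
print(f"10-K filings mentioning NIST: {h1_2023} (H1 2023) vs. {h1_2024} (H1 2024)")
```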

Conversely, 8-K filings told a different story. The number of reported cyberattacks is consistently growing, and most enterprises might be expected to be hyper-alert to any potential breach and report it as such. Yet we found only 17 potentially material cybersecurity incidents across 4,000+ listed US companies. 

That only a fraction of a percent of all companies reported a “material” incident seems unlikely. Even more so when you consider that none of these 17 would confirm that the incident was severe enough to count as “material”. The most worrying conclusion is that there is a mass of material incidents waiting to be discovered. 

Defusing the time bomb

This growing burden shouldn’t be an issue for CISOs if they can understand and communicate their posture. The challenge is that, while Business Intelligence and analytics tools have been commonplace in finance, sales, and leadership for decades, CISOs have been left to scrounge data from disparate tools with no single, trusted view. Without a clear view of risk, it’s near impossible to turn that picture into clear action and strategy, translate risk into business vernacular, and influence the necessary people. 

Security teams need to validate the data they are working from using multiple sources to reach a single source of truth. By shining a light on coverage gaps, and giving context to threats, businesses can improve governance and risk reporting and mitigate cyber harms. Ultimately, whether reporting to the ERM team, investors, or the SEC, the CISO can use a language their audience will understand and ensure that everyone is held accountable.

This won’t only reduce the reporting burden and give investors greater confidence. It will also ensure that breaches cannot become a time bomb under the organization, waiting to be detonated by one incorrect 8-K or 10-K report. 

The post Six months into new SEC rulings, can enterprises escape the crossfire? appeared first on Cybersecurity Insiders.

CISA’s recent guidance to shift from VPNs to SSE and SASE products strengthens data protections, but misses an opportunity to champion more robust, hardware-enforced security controls to harden access points like web browsers.

Acting in the wake of several major vulnerabilities against VPN products at the beginning of this year, the US Cybersecurity and Infrastructure Security Agency (CISA), along with its Canada and New Zealand counterparts, released recommendations on shifting from VPNs to Zero-Trust solutions such as Security Service Edge (SSE) and Secure Access Service Edge (SASE) products. By creating architectures around Zero-Trust Network Access (ZTNA) principles, organizations can ensure that additional identity- and risk-based controls are in place to protect access to sensitive resources and data. These protections can make it significantly harder for adversaries to encrypt valuable databases for ransom, steal the information contained in those databases, or both, particularly once an adversary has evaded or subverted the security software designed to keep them out in the first place.
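What might such an identity- and risk-based check look like? Below is a deliberately simplified, default-deny Python sketch of a ZTNA-style decision; the signal names, checks, and ordering are illustrative assumptions, not the logic of any particular SSE or SASE product.

```python
# Simplified ZTNA-style access decision: every request is evaluated
# against identity, device posture, entitlement, and context, and the
# default outcome is deny. All signals and thresholds are illustrative.

def ztna_decision(user: dict, device: dict, request: dict) -> str:
    # 1. Identity: the session must carry a fresh, verified authentication.
    if not user.get("mfa_verified"):
        return "deny: MFA required"

    # 2. Device posture: an unmanaged or non-compliant device lowers trust.
    if not (device.get("disk_encrypted") and device.get("edr_running")):
        return "deny: device posture check failed"

    # 3. Least privilege: the user must be entitled to this specific app,
    #    not merely "on the network" as with a traditional VPN.
    if request["app"] not in user.get("entitled_apps", ()):
        return "deny: no entitlement for this application"

    # 4. Context: anomalous signals trigger step-up rather than silent allow.
    if request.get("geo") not in user.get("usual_geos", ()):
        return "challenge: step-up authentication"

    return "allow"

print(ztna_decision(
    user={"mfa_verified": True, "entitled_apps": ["payroll"], "usual_geos": ["US"]},
    device={"disk_encrypted": True, "edr_running": True},
    request={"app": "payroll", "geo": "US"},
))  # -> allow
```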

There is, after all, little evidence that the underlying software mechanisms of ZTNA platforms are less vulnerable than VPN software – a quick scan of CVEs in 2024 so far reveals no fewer than 10 vulnerabilities in supposed “Zero-Trust” security services, often with significant consequences including code execution and security bypass. Such vulnerabilities can provide adversaries using low-visibility techniques like “living off the land” with points of presence in organizations’ systems that long outlive the vulnerabilities themselves, even if maneuvering for effect within an organization’s network will be significantly more difficult in a zero-trust environment.

Even as they implement CISA’s guidance to minimize the potential impact of a breach, organizations should be asking a more fundamental question: how do we keep adversaries out of networks in the first place? One answer is found in CISA’s guidance, though only in a cursory way: the use of hardware-enforced network segmentation, such as unidirectional gateways and data diodes, to shield the most sensitive systems in the network. Because hardware-enforced segmentation technologies make it physically impossible for data and code to travel from risky environments to sensitive systems, the chance that an adversary will be able to leverage their current presence in the network – or even throw code from outside the network – to compromise one of these sensitive systems is near zero.
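To make the one-way property concrete, the sketch below models the software pattern that sits on top of a diode: a transmit-only sender with no receive path at all. The address and payload are placeholders, and it is the hardware (for example, an optical link with no return fibre) that physically guarantees one-way flow; this code is only a conceptual illustration of the sender side.

```python
# Conceptual sketch of the transmit side of a unidirectional gateway.
# Real data diodes enforce one-way flow physically in hardware; this
# only models the protocol pattern: fire-and-forget datagrams with no
# receive path. Address and payload are placeholders.

import socket

DIODE_TX_ADDR = ("192.0.2.10", 9000)  # placeholder: receive side of the diode

def send_one_way(payload: bytes) -> None:
    """Push data toward the protected network: no ACKs, no return channel."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: connectionless
    try:
        sock.sendto(payload, DIODE_TX_ADDR)
        # Deliberately no recv() anywhere: with a hardware diode the return
        # path simply does not exist, so replies are physically impossible.
    finally:
        sock.close()

send_one_way(b"sensor-reading:42")
```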

As a result of the high level of security that hardware-enforced solutions provide, CISA recommends them as a control for sensitive operational technology (OT) systems like the ones that form the backbones of utility networks. But as the 2021 ransomware attack on Colonial Pipeline demonstrated, attackers don’t need to make it into OT systems to have a massive impact on a utility – they simply need enough presence in IT networks to cripple operations or pose an unacceptable security risk. Minimizing this impact means thinking differently about hardware-enforced isolation mechanisms: using them not only to secure the organization’s most sensitive systems, but also to shield the organization from the riskiest networks and applications – for example, the Internet, with over 1 billion websites of largely unevaluated code, and web browsers, agile and feature-rich yet insecure apps that suffered, in aggregate, 19 zero-day vulnerabilities in the last year alone.

Business priorities mandate that the vast majority of users have more or less unfettered access to the information hosted on the open Internet. In the energy industry, for example, traders may have to conduct Internet research on geopolitical, weather, and logistical conditions around the globe; in finance, analysts may have to conduct sensitive diligence operations to support mergers and acquisitions. The pace of these activities is so rapid that cybersecurity teams don’t have the time or resources to evaluate each site individually, so they instead rely on third-party lists of “known bad” sites to block or, in the best case, third-party lists of “known good” sites – those believed not to host malicious code – to allow into the network. Yet both approaches depend on knowing what “bad” looks like – and, as evidenced by the nearly 20 zero-days against browsers in 2023, the definition of “bad” continually changes.
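A stripped-down filter makes the structural weakness visible: every site that appears on neither list forces a default decision, and that default is exactly where novel threats slip through. The domains and categorizations below are invented for illustration.

```python
# Toy URL filter showing why list-based controls struggle with zero-days:
# unknown hosts always fall through to a default decision. The example
# domains are illustrative, not real categorizations.

from urllib.parse import urlparse

KNOWN_BAD  = {"malware.example"}    # denylist: catches only known threats
KNOWN_GOOD = {"intranet.example"}   # allowlist: blocks most of the web

def allow_url(url: str, mode: str) -> bool:
    host = urlparse(url).hostname or ""
    if mode == "denylist":
        # Unknown hosts are allowed by default, so a freshly registered
        # phishing or exploit site that no vendor has categorized yet
        # sails straight through.
        return host not in KNOWN_BAD
    if mode == "allowlist":
        # Unknown hosts are blocked by default: safer, but it breaks the
        # fast-moving research workflows described above.
        return host in KNOWN_GOOD
    raise ValueError(f"unknown mode: {mode}")

print(allow_url("https://brand-new-threat.example/", "denylist"))   # True: allowed
print(allow_url("https://brand-new-threat.example/", "allowlist"))  # False: blocked
```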

Instead of playing the cat-and-mouse game of identifying “bad” before it impacts the systems of major organizations, hardware-enforced browser isolation solutions keep all but the most explicitly trusted activities off an organization’s systems. Risky browsing is instead conducted on cloud-hosted processors and converted to an interactive video stream, with keystrokes and mouse movements sent back via the same types of one-way, fixed-function hardware used to protect OT networks. By applying this type of technology to web browsing, organizations can remove, in one stroke, one of the least-secure points of access for malicious code and one of the hardest-to-secure points of egress for stolen data.
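The minimal sketch below illustrates the pixel-level idea using a headless browser (Playwright, chosen purely for illustration): the endpoint receives only an image of the rendered page, never its HTML or JavaScript, and input travels back as bare coordinates. Commercial isolation products stream interactive video over fixed-function hardware rather than returning screenshots, so treat this as a conceptual model, not an implementation.

```python
# Conceptual model of remote browser isolation: rendering happens in a
# disposable cloud browser, and only pixels cross back to the endpoint.
# Playwright is used here purely as an illustration
# (pip install playwright && playwright install chromium).

from playwright.sync_api import sync_playwright

def isolated_visit(url: str, click_at: tuple | None = None) -> bytes:
    with sync_playwright() as p:
        browser = p.chromium.launch()      # risky rendering happens here,
        page = browser.new_page()          # never on the user's endpoint
        page.goto(url)
        if click_at:                       # input arrives as coordinates only
            page.mouse.click(*click_at)
        pixels = page.screenshot()         # only an image leaves the sandbox
        browser.close()
        return pixels

# The endpoint would display these bytes and forward the next click.
png = isolated_visit("https://example.com", click_at=(100, 200))
print(f"received {len(png)} bytes of pixels, zero bytes of page code")
```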

CISA is right to recommend a shift to controls like SSE and SASE solutions for a more granular approach to data access within corporate networks, and right to call for more robust controls like hardware-enforced segmentation for the most sensitive networks. But leaving protection against the largest, highest-threat network on the planet – the open Internet – to software-based solutions that can be subverted makes it a question of when, not if, that software too is compromised. Enforcing security with hardware at both the highest-risk and highest-sensitivity portions of the network provides a more assured option.

The post CISA Guidance Strengthens Data Security, Neglects Web Access Security appeared first on Cybersecurity Insiders.

In recent years, the landscape of remote work and cybersecurity has undergone significant changes, driving organizations to reevaluate their reliance on traditional Virtual Private Networks (VPNs). The 2024 VPN Risk Report, compiled by Cybersecurity Insiders in collaboration with HPE Aruba Networking, provides an in-depth analysis of the challenges associated with VPNs and highlights the growing shift towards Zero Trust Network Access (ZTNA) as a more secure and efficient alternative.

The Limitations of VPNs in Modern Work Environments

VPNs have long been the cornerstone of remote access solutions, offering a secure tunnel for data transmission between remote users and corporate networks. However, the report underscores several critical limitations that are becoming increasingly problematic as organizations adopt more dynamic and distributed work models.

1. Security Vulnerabilities: A staggering 92% of survey respondents expressed concern that VPNs might compromise their ability to maintain a secure environment. VPNs often provide broad access to corporate networks, meaning that once a malicious actor breaches a VPN, they can potentially access sensitive data and systems with little restriction. This wide-access model is at odds with the principle of least privilege, which is central to modern cybersecurity practices.

2. User Dissatisfaction: The report reveals that 81% of users are dissatisfied with their VPN experience, citing issues such as slow connections and frequent disconnections. This poor user experience hurts productivity and increases the likelihood of users seeking unsecured workarounds, further jeopardizing security.

3. Management Complexity: Managing VPNs can be complex and resource-intensive. With 65% of organizations operating three or more VPN gateways to support their remote users, the administrative burden on IT teams is significant. This complexity can lead to configuration errors and oversights, further elevating security risk.

4. Scalability Issues: As organizations grow and their remote workforces expand, scaling VPN infrastructure to meet demand becomes increasingly challenging. The high costs of scaling and maintaining VPNs, combined with their inherent limitations, make them a less viable solution for large, dynamic enterprises.

The Rise of Zero Trust Network Access (ZTNA)

Given VPNs’ limitations, many organizations are turning to ZTNA as a more robust solution for secure remote access. ZTNA operates on the principle of “never trust, always verify,” ensuring that every access request is continuously authenticated and authorized based on various contextual factors.

1. Enhanced Security: Unlike VPNs, ZTNA enforces granular access controls, allowing users to access only the resources they need for their roles (the sketch after this list illustrates the difference). This minimizes the attack surface and reduces the risk of lateral movement within the network in the event of a breach. The report indicates that 75% of organizations view Zero Trust as a priority, recognizing its potential to enhance security in a distributed work environment.

2. Improved User Experience: ZTNA solutions are designed to provide seamless access to applications regardless of the user’s location, without the performance issues commonly associated with VPNs. This leads to higher user satisfaction and productivity. By leveraging cloud-native architectures, ZTNA can offer more reliable and faster connections than traditional VPNs.

3. Simplified Management: ZTNA reduces the complexity of managing remote access by centralizing policy enforcement and leveraging automated tools for monitoring and threat detection. This streamlined approach allows IT teams to focus on strategic initiatives rather than routine maintenance tasks.

4. Scalability and Flexibility: ZTNA solutions are inherently scalable, making them suitable for organizations of all sizes. They can easily accommodate a growing number of remote users and integrate with various cloud services and applications. This flexibility is crucial as more businesses adopt hybrid and multi-cloud environments.
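The first item above promised a sketch: the difference between the two models is easiest to see as blast radius, that is, what an attacker holding stolen credentials can reach. The users, applications, and entitlements here are invented for illustration.

```python
# Illustrative blast-radius comparison. With a VPN, one successful login
# grants network-level reach; with ZTNA, access is brokered per
# application. All names and entitlements are invented.

NETWORK_APPS = {"payroll", "crm", "build-server", "hr-database"}

USERS = {
    "alice": {"entitlements": {"payroll"}},   # finance
    "bob":   {"entitlements": {"crm"}},       # sales
}

def reachable_via_vpn(user: str) -> set:
    # A VPN authenticates once, then routes traffic: every reachable
    # application on the network is in scope.
    return NETWORK_APPS

def reachable_via_ztna(user: str) -> set:
    # ZTNA evaluates each request against explicit entitlements.
    return USERS[user]["entitlements"]

stolen = "bob"  # suppose bob's credentials are phished
print("VPN blast radius: ", sorted(reachable_via_vpn(stolen)))   # all four apps
print("ZTNA blast radius:", sorted(reachable_via_ztna(stolen)))  # ['crm']
```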

The Future of Remote Access

The 2024 VPN Risk Report provides compelling evidence that the era of VPNs as the primary solution for remote access is waning. With 59% of organizations having adopted or planning to adopt ZTNA within the next two years, the shift is well underway.

As cyber threats evolve and the demand for secure, efficient remote access grows, businesses must reassess their current strategies and consider more modern solutions like ZTNA. This transition enhances security and aligns with the broader digital transformation goals of agility, scalability, and user-centric design.

Conclusion

The insights from the 2024 VPN Risk Report highlight a critical inflection point in remote access and cybersecurity. The persistent issues associated with VPNs – security vulnerabilities, poor user experience, management complexity, and scalability challenges – underscore the need for a more modern approach.

Zero Trust Network Access (ZTNA) emerges as a compelling alternative, offering enhanced security, improved user experience, simplified management, and greater scalability. As organizations navigate this transition, ZTNA adoption is poised to become a cornerstone of modern cybersecurity strategies, ensuring robust protection in an increasingly interconnected world.

The post The Shift from VPNs to ZTNA appeared first on Cybersecurity Insiders.