Phishing is an ever-growing concern in cybersecurity. It was the most common attack type in 2023, accounting for 43.3% of email-based threats – and its danger has been supercharged by the rise of generative AI. Businesses are right to be worried.

GenAI has transformed the global cybersecurity landscape. As it evolves, criminals are using it to launch increasingly sophisticated attacks with alarming ease. Despite this, companies can protect themselves from falling victim to an attack and avoid becoming one of the many entities that collectively spent $1.1 billion globally on ransomware payments last year.

Why so many attacks?

Email has been a staple of communication for decades, becoming almost second nature in both personal and professional contexts. This familiarity, however, has led to complacency, making email the perfect channel for cybercriminals.

One malicious application of genAI in emails is to impersonate trusted companies. In 2023, e-commerce and delivery companies DHL (26.1%), Amazon (7.7%), and FedEx (2.3%) ranked among the ten most frequently impersonated brands. Typically, these attacks deliver urgent messages demanding an equally swift response to verify an “anomalous transaction” or “validate a delivery”. This pressure encourages hurried action, preventing the recipient from pausing to consider the source of the message, and the goal is usually to gain access to a person’s private information for financial gain or blackmail.

Cybercriminals can also use genAI in more sophisticated ways: to create malicious spear-phishing emails, generate spoofing kits, or learn how to use advanced tools such as multi-factor authentication (MFA) bypass kits, and even how to build ransomware. Ransomware remains a major issue, as evidenced by the LockBit attacks earlier this year, among many others.

That is not all, however. GenAI has been a boon for many threat actors, and the number of possible ways it can assist an attacker is staggering.

How threat actors use genAI

Threat actors have become adept at leveraging genAI in many ways, but here are some of the most common:

1. Open-source intelligence (OSINT) involves gathering information from publicly available sources, such as security forums, news articles, and social media, in order to impersonate a service or contact and lend legitimacy and authority to a communication. GenAI has made this process significantly easier: its capability to scour and analyse vast amounts of data quickly means attackers can compile dossiers of targeted information that they can use to stage further attacks.

2. Attack chain assistance can be provided by popular LLMs, which help cybercriminals learn how to build every stage of a given attack. This is particularly helpful for novice threat actors, who need to understand each step before they can execute it. GenAI can not only teach an attacker about a given attack method, it can also show them how to carry it out, putting the ability to launch attacks within reach of a whole new generation of hackers who previously lacked the knowledge to do so.

3. Spear phishing generation involves highly customised lures delivered in a meticulously targeted way. Criminals research their targets and use that detailed knowledge to improve success rates, baiting users into clicking links, downloading attachments, or entering their login details on a fake but genuine-seeming sign-in page. MFA bypass kits, such as Evilginx and the W3LL panel, are increasingly paired with these spear-phishing attempts. GenAI streamlines the process of setting up the delivery mechanism (an email) with a link to a reverse-proxy service that effectively mounts an adversary-in-the-middle attack. These kits present a convincing login page, capture session cookies during the MFA prompt, and then redirect the user to the legitimate page, none the wiser to what has happened.
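To make this concrete, here is a minimal Python sketch of one defensive heuristic: extracting the links from a message body and flagging any whose destination host is not an exact match for a known-good login domain. The allowlist and sample email are illustrative placeholders rather than a production filter; real email security tools layer many more signals on top of checks like this.

```python
# Minimal sketch: flag links whose destination host is not on a
# known-good allowlist. Domains and sample text are placeholders.
import re
from urllib.parse import urlparse

TRUSTED_LOGIN_DOMAINS = {"login.microsoftonline.com", "accounts.google.com"}

def suspicious_links(email_body: str) -> list[str]:
    """Return URLs whose host is not an exact match for a trusted domain."""
    urls = re.findall(r'https?://[^\s"\'<>]+', email_body)
    flagged = []
    for url in urls:
        host = (urlparse(url).hostname or "").lower()
        # Reverse-proxy phishing kits often use lookalike hosts such as
        # login.microsoftonline.com.evil-proxy.example; exact matching
        # defeats that trick where a substring check would not.
        if host not in TRUSTED_LOGIN_DOMAINS:
            flagged.append(url)
    return flagged

if __name__ == "__main__":
    body = ("Urgent: verify your account at "
            "https://login.microsoftonline.com.attacker.example/auth")
    for link in suspicious_links(body):
        print("Suspicious link:", link)
```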

What should organisations do?

GenAI attacks are on the increase and extremely concerning, but they can be addressed. With the right knowledge, products and expertise, companies can build a robust defence against this evolving threat landscape. 

Employees are an important line of defence, so companies must invest in ongoing employee education, training and awareness of these attacks to create a secure system. However, Hornetsecurity research indicates that 26% of organisations still provide no training, which places them at risk. 

Training should be regular and engaging to ensure employees can identify, deflect and report potential attacks. By combining robust technical defences with an empowered workforce, an organisation can establish a culture where security is important for all employees, regardless of their position.  

Cybersecurity providers are also using AI to counteract these threats. Many now offer comprehensive next-gen protection packages aimed at helping organisations strengthen their defences with AI support. 

One thing to remember is that humans initiate AI attacks, and phishing will always be phishing, regardless of how cleverly disguised it is. AI-enabled attacks are based on known tactics and current technology (so far), which means they have limitations. They can largely be recognised and blocked by email security tools, and for the few that make it through, it’s a matter of employees knowing what to do.

Hornetsecurity’s mission is to continue to stay ahead of the AI game. We empower organisations of all sizes to focus on their core business, while we protect their email communications, secure their data, help them strengthen their employees’ cybersecurity awareness, and ensure business continuity with next-generation cloud-based solutions. 

The post The new face of phishing: AI-powered attacks and how businesses can combat them appeared first on Cybersecurity Insiders.

On June 3, the public comment period closed for the U.S. Cybersecurity & Infrastructure Security Agency’s (CISA) Notice of Proposed Rulemaking (Proposed Rule) under the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA). CISA now has until October 2025 to make any modifications and publish its Final Rule.

What is CIRCIA?

CIRCIA was signed into law in March 2022 in response to a growing number of cyber threats and attacks on entities operating within certain critical infrastructure sectors. Under CIRCIA, companies within 16 critical infrastructure sectors will be required to report substantial cyberattacks within 72 hours “after the company reasonably believes the incident has occurred.” Ransomware payments must also be reported within 24 hours of being made, and companies must retain certain documents for two years following the incident. CISA’s 447-page Proposed Rule, published this April, set forth the criteria for determining which companies are covered and which incidents must be reported.

Who Must Report? 

CIRCIA applies to companies operating in “a critical infrastructure sector,” but the law itself does not define which companies are within those sectors. The Proposed Rule commentary, however, indicates that the definition is tied to the sector descriptions in the critical infrastructure Sector-Specific Plans that were developed in 2015 under Presidential Policy Directive 21. 

The Proposed Rule further clarifies that for the reporting obligations to apply, the company must not only operate within one of the 16 critical infrastructure sectors, but also either (1) exceed the U.S. Small Business Administration’s (SBA) small business size standard or (2) meet certain sector-based criteria for 13 of the 16 critical infrastructure sectors. These sector-based criteria are independent from the SBA criteria. For example, healthcare facilities with fewer than 100 beds are not required to report incidents, but “critical access” hospitals would be required to report, regardless of size. (The Proposed Rule does not include any sector-based criteria for the Commercial Facilities, Dams, or Food and Agriculture sectors.)

What Must Be Reported and When? 

The Proposed Rule generally requires companies to report “substantial” cyber incidents within 72 hours after the company “reasonably believes” a covered cyber incident has occurred and within 24 hours after a ransom payment has been made. 

When is a cyber incident “substantial”?

A cyber incident would be “substantial” if it leads to any of the following:

  1. A substantial loss of confidentiality, integrity, or availability of an information system or network.
  2. A serious impact on the safety and resiliency of operational systems and processes.
  3. A disruption of a company’s ability to engage in business or industrial operations or deliver goods or services.
  4. Unauthorized access to information systems or networks, or any nonpublic information contained therein, that is facilitated through or caused by either a compromise of a cloud-service provider, managed-service provider or other third-party data hosting provider, or a supply chain compromise. 

Examples of substantial cyber incidents include: (1) a distributed denial-of-service (DDoS) attack that renders a company’s services or goods unavailable to customers for an extended period, (2) a cyber incident that encrypts a core business or information system and/or (3) unauthorized access to a company’s business systems using compromised credentials from a managed-service provider.

But not every incident will be reportable under the Proposed Rule. For example, a DDoS attack that results only in a brief period of unavailability of a company’s website that does not provide critical functions or services to customers would not require a report, nor would a cyber incident that results in only minor disruptions or the compromise of a single user’s credentials. A malicious software download also would not be reportable if antivirus software prevents it from executing. 

In short, the duration and criticality of the impact matter: a DDoS attack that only temporarily stops customers from visiting a company’s website would not be substantial, whereas a similar attack causing significant downtime for critical functions would meet the criteria.

When must a company report an incident? 

The Proposed Rule recognizes that a company’s “reasonable belief” that a covered incident has occurred is subjective. In many cases, a company will need to perform some “preliminary analysis” before reaching a reasonable belief that a cyber incident has occurred. CISA indicated in its Proposed Rule, however, that any preliminary analysis “should be relatively short in duration (i.e., hours, not days) before a ‘reasonable belief’ can be obtained, and generally would occur at the subject matter expert level and not the executive officer level.”

What must a company report? 

Under the Proposed Rule, a company would be required to submit incident reports on a web-based portal, including all of the following:

  • A narrative description of the incident, including the impacted information systems, a timeline of the incident, and its operational impact.
  • A description of any vulnerabilities, as well as the covered entity’s security controls.
  • The tactics, techniques, and procedures (TTPs) used by the perpetrator and any associated indicators of compromise.
  • Whether any third parties, including law enforcement, were engaged for assistance, and the identities of those third parties.

For ransom payment reports, CISA requires similar information plus details about the ransom demand amount, the date of the payment, the amount paid, and the outcome of the ransom payment.

What Does CISA Do with the Information? 

With new reporting obligations come concerns about how the disclosures might be used. Although it does not affect a reporting company’s liability for the incident itself, CIRCIA provides certain protections for these reports. Information provided to CISA may only be disclosed or used by a federal agency for (1) a cybersecurity purpose; (2) identifying a cybersecurity threat or a security vulnerability; and (3) responding to or mitigating a specific threat of death, serious bodily harm, or serious economic harm. No enforcement action may be taken based solely on the submission of a report or a response to a request for information from CISA. In addition, reports, responses, and related communications may not be admitted as evidence, subjected to discovery, or used in any legal proceedings. A covered entity may designate its report as “commercial, financial, and proprietary information” if it desires, and reports are exempt from disclosure under the Freedom of Information Act (FOIA) and similar laws.

What Happens for Failure to Report? 

The Proposed Rule grants CISA authority to issue subpoenas to companies compelling disclosure of information “if there is reason to believe that the entity experienced a covered cyber incident or made a ransom payment but failed to report the incident or payment …” If a company fails to comply or provides an inadequate or false response, CISA may refer the inquiry to the U.S. Department of Justice to bring a civil action or pursue acquisition penalties, suspension or debarment.

What’s Next?

CISA has until October 4, 2025, to make any modifications and publish its Final Rule. CISA expects the Final Rule to come into effect in early 2026. While companies will not be required to report cyber incidents or ransom payments until the Final Rule goes into effect, CISA has encouraged all companies to voluntarily share information in the interim. 

How Can Companies Prepare? 

Many companies in highly regulated industries will already have written information security programs that will need to be modified to account for this new 72-hour reporting requirement. For companies within a critical infrastructure sector that do not currently have written information security programs, including written incident response plans, devising such plans and running tabletop simulations will be crucial in preparing for the implementation of the Final Rule. As CISA has indicated, companies will be expected to conduct a preliminary analysis of an incident in “hours, not days.” Thus, a company’s written response plan should be a familiar document to IT, information security, legal, and executive employees.

The post What to Know About CISA’s New Cyber Reporting Rules appeared first on Cybersecurity Insiders.

Cyberthreats never stay the same. Just as fast as cybersecurity providers shut down an attack vector or develop a fix for a particular form of attack, cybercriminals develop new exploits and tactics to burrow their way in. One significant emerging attack type is the zero-click attack, which can create a devastating impact from the smallest of user actions. Businesses need to ensure they’re aware of how these attacks work – and what they can do to protect themselves.

Zero-click attacks can rapidly compromise social media accounts or other systems through innocuous-looking messages. These insidious malware attacks are transmitted through DMs within social media apps and don’t require a download, click, response, or any other act from users beyond opening a message. Anyone could fall victim to them, and the business impacts could be huge.

Indeed, the official TikTok accounts of CNN and possibly Sony were recently compromised as a result of zero-click attacks. These attacks capitalised on a zero-day weakness – a flaw in TikTok’s software that hadn’t yet been patched. When the users opened the messages, the malware launched itself and rapidly (and quietly) took control of the account. 

This is the hallmark of a zero-click attack; code is surreptitiously delivered to the target’s account or device through a call, message, or text, and that code then exploits vulnerabilities to begin extracting data or granting access. It’s not just brands at risk of petty cybercrime, either – in 2018, Jeff Bezos’s phone was compromised in a zero-click attack apparently launched via a WhatsApp message sent from the personal account of Mohammed bin Salman, the crown prince of Saudi Arabia. The bottom line? No one is safe – not even the richest man in the world.

So how can businesses protect themselves quickly and effectively? Here are some of the key steps businesses and individuals can take to stay ahead of zero-click attack methodologies, as well as other emerging attack methods used by fraudsters.  

First, it’s essential to have powerful data protection systems in place to ensure your security teams are alerted as soon as sensitive data is in danger. Hackers do not immediately obtain access to sensitive information the moment a website is compromised; there is a brief window of opportunity in which attacks can be curtailed and data can be made safe. A robust, intelligent data protection system can be the difference between a zero-click attack being mitigated, and one turning into a major data breach.

Likewise, it’s important to invest in cybersecurity tools to avoid network breaches. For example, advanced threat intelligence systems and behaviour-based analytics can proactively detect and mitigate the risks posed by highly sophisticated scammers. These forms of cybersecurity rely on the increasing availability of next-gen analytics and AI to identify suspicious behaviour and potential threats on a much more nuanced level than older systems. Threat intelligence also enables businesses to benefit from insights gathered across a wide user base, so that when a zero-click attack is detected and mitigated in one organisation, the intel gained is fed back into the industry, enabling other users to more quickly identify and stop the same type of attack.
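To illustrate the principle behind behaviour-based analytics, the short Python sketch below flags a burst of account activity that deviates sharply from the account’s own baseline. The threshold and sample data are illustrative assumptions; real systems model far more behavioural signals than a single count.

```python
# Minimal sketch of behaviour-based anomaly detection: flag activity
# that sits far outside an account's own historical baseline.
# The 3-sigma threshold and the sample series are illustrative choices.
from statistics import mean, stdev

def is_anomalous(history: list[int], latest: int, threshold: float = 3.0) -> bool:
    """Flag `latest` if it exceeds the baseline by more than `threshold` std-devs."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest > baseline
    return (latest - baseline) / spread > threshold

if __name__ == "__main__":
    # e.g. direct messages sent per hour by one account over the past day
    dms_per_hour = [2, 0, 1, 3, 2, 1, 0, 2, 4, 1, 2, 3]
    print(is_anomalous(dms_per_hour, latest=120))  # True: likely account takeover
```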

As well as technological defences, training staff and raising awareness remain critical parts of a strong defence. Although zero-click attacks are, by definition, harder to stop through good user practice – even a perfectly vigilant employee can’t know whether a message contains malicious code before they’ve opened it – there’s still a great deal that can be done to mitigate the impact if a breach does take place. 

Good user education is the cornerstone of a good defence strategy, so businesses need to train staff on cybersecurity best practices, including through conducting security skills assessments and developing standard operating procedures to follow in the event of a suspected breach. For example, staff can be trained to pay attention to unusual difficulties logging into a social media account or to recognise tell-tale signs in messages that can point to potentially suspicious behaviour.

Finally, utilising basic security hygiene procedures can make a significant difference to the quality of your defence against emerging attack types. That might mean incorporating MFA controls as standard for all users, enforcing password changes on a regular basis, or implementing frequent vulnerability scanning and patching on devices, apps, and cloud-based systems. Covering the basics is key; for example, even if a hacker gains access to an account through a zero-click attack, MFA places limitations on the actions they can take. Likewise, regular patching ensures the number of possible vulnerabilities for zero-click attacks to exploit is kept to a minimum.
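As a small, concrete illustration of the patching point, the following Python sketch (which assumes a Debian or Ubuntu host with the standard apt tool available) lists packages with pending upgrades and highlights those coming from a security pocket. It is a starting point for an audit script, not a replacement for a patch-management platform.

```python
# Illustrative sketch (assumes a Debian/Ubuntu host with apt available):
# list upgradable packages and highlight those from security repositories.
import subprocess

def pending_upgrades() -> list[str]:
    """Return 'apt list --upgradable' entries, one line per package."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # The first line is the "Listing..." banner; skip it and any blanks.
    return [line for line in result.stdout.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    for pkg in pending_upgrades():
        # On Ubuntu, security updates typically come from a "-security" pocket.
        marker = "SECURITY" if "-security" in pkg else "normal"
        print(f"[{marker}] {pkg}")
```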

Businesses need to remain vigilant in the face of an ever-changing cybersecurity landscape. Zero-click attacks are just one example of an emergent threat that has the potential to damage companies’ reputation and finances. But with the right technologies, training, and processes in place, they can stay one step ahead of hackers.

The post How to defend against zero-click attacks appeared first on Cybersecurity Insiders.

The cybersecurity landscape is evolving at an unprecedented pace, driven by the rapid expansion of digital infrastructures, the adoption of cloud technologies, and the relentless advancement of threat capabilities, including new AI tools and techniques. This dynamic environment presents a dual challenge: not only must we defend against a diverse array of threats, but we must also do so faster than ever before.

The exponential speed of attacks leveraging zero-day and newly disclosed vulnerabilities demonstrates that threats have surpassed the capacity of traditional, reactive cybersecurity technologies and strategies. We must shift our focus towards more proactive, predictive, and, particularly, fully automated and AI-driven approaches to network and cyber defense.  

Cyber Attacks Keep Getting Faster

The recent ConnectWise vulnerability that was widely exploited, allowing any remote attacker to gain unauthorized access and control, exemplifies the speed and scale that threat actors now aspire to with new threat methodologies. SixMap global threat intelligence observed just four days between vulnerability disclosure by the vendor and massive, global-scale exploitation in the wild. Industry research reported 3,000 instances vulnerable to this flaw reachable from the Internet. 

The rapid exploitation of the ConnectWise vulnerability underscores a broader issue within cybersecurity practices. The Verizon 2024 Data Breach Investigations Report highlights a critical issue in vulnerability management, showing that a significant percentage of vulnerabilities remain unremediated even after 30, 60, and 365 days. Their analysis reveals that 85% of vulnerabilities are unremediated at 30 days, 47% at 60 days, and 8% remain unremediated even after a year.

Adding to the challenge, attackers are moving faster and becoming more efficient in exploiting these vulnerabilities. According to CrowdStrike’s 2024 Global Threat Report, the average “breakout time”—the time it takes an attacker to go from initial intrusion to lateral movement—for adversaries was 62 minutes in 2023, sped up from 84 minutes in 2022.

The acceleration of attacks today highlights a critical gap in current cybersecurity practices: the lag between threat detection and response. As cyber threats evolve to exploit vulnerabilities at scale faster than ever, the window for effective response narrows dramatically. This underscores the urgent need for more efficient and proactive vulnerability management strategies that can handle both new and existing vulnerabilities effectively.

The Role of AI and Automation in Cyber Defense

“Velocity of action” emphasizes the importance of quick, decisive action to outpace opponents and deal effectively with evolving threats. This concept is important for developing cybersecurity tools and practices in the future that can meet or exceed the rapid pace at which cyber threats evolve and stave off the potentially severe consequences of delayed responses. Automation is how we achieve velocity of action. 

In the face of escalating cyber threats, integrating automation into cyber defense systems as part of a comprehensive Continuous Threat Exposure Management (CTEM) program has transitioned from a value-added feature to a core necessity. Automation empowers cybersecurity operations with speed, efficiency, and scalability—attributes crucial to addressing today’s threat landscape. These are the four areas of cybersecurity where every security leader should look to incorporate various levels of AI and automation:

• AI in Network Security: Artificial intelligence is reshaping network security by enhancing the functionality of automated systems. AI empowers these systems to learn from previous incidents and adapt to new threats. It excels at uncovering complex patterns and subtle anomalies that might escape detection by human analysts. It also simplifies the cybersecurity workflow by taking over routine and labor-intensive tasks, significantly improving operational efficiency. 

• Automated Threat Prioritization: Automation in threat prioritization leverages AI to assess and rank threats based on their potential impact and likelihood of exploitation. By integrating threat intelligence from various sources, AI can prioritize the most critical vulnerabilities, such as those that can be leveraged for ransomware attacks, those actively exploited by known threat actors, and those with high EPSS (Exploit Prediction Scoring System) scores. This data-driven approach ensures that security teams focus their efforts on mitigating the most pressing risks (a minimal example of this kind of ranking follows this list).

• Automated Vulnerability Validation: Just because a vulnerability exists doesn’t mean attackers can reasonably exploit it. Automation can be used to validate that a network asset is actually exploitable in the infrastructure of a specific environment. This reduces the burden on security teams and allows them to focus on mitigating the threats that matter to their organization. 

• Automated Threat Mitigation: Organizations should deploy capabilities that give them the option, but not the obligation, to auto-fix vulnerabilities at scale. While an automated remediation approach carries risks, organizations should be able to weigh those risks against the risks posed by the imminent threat of a specific cyber attack. For example, when defenders are dealing with fast-moving attacks that allow adversaries to gain root privileges through remote code execution, automated remediation should be an option to stop the attack. 
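To make the prioritization idea concrete, the Python sketch below ranks a handful of CVEs using the public FIRST EPSS API together with a stand-in for CISA’s Known Exploited Vulnerabilities (KEV) catalogue. The weighting is an arbitrary illustrative choice, not a recommended scoring model, and a real pipeline would load the KEV list from CISA’s published feed.

```python
# Illustrative sketch: rank CVEs by exploit likelihood using the public
# FIRST EPSS API. The KEV stand-in and the weighting are placeholder choices.
import requests

EPSS_API = "https://api.first.org/data/v1/epss"

# In practice, load this set from CISA's KEV catalogue feed.
KNOWN_EXPLOITED = {"CVE-2024-1709"}  # e.g. the ConnectWise ScreenConnect flaw

def epss_scores(cves: list[str]) -> dict[str, float]:
    """Fetch EPSS exploit-probability scores for a batch of CVE IDs."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cves)}, timeout=10)
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}

def prioritize(cves: list[str]) -> list[tuple[str, float]]:
    """Sort CVEs so that known-exploited and high-EPSS issues come first."""
    scores = epss_scores(cves)
    ranked = []
    for cve in cves:
        score = scores.get(cve, 0.0)
        if cve in KNOWN_EXPLOITED:
            score += 1.0  # active exploitation outranks any prediction
        ranked.append((cve, score))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for cve, score in prioritize(["CVE-2024-1709", "CVE-2021-44228", "CVE-2019-0708"]):
        print(f"{score:.3f}  {cve}")
```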

The journey towards a fully automated cyber defense framework is complex and necessitates a thorough evaluation of the operational considerations. Despite these complexities, the advantages of improved security, efficiency, and resilience make this pursuit highly valuable and worthwhile.

The post Future-proofing Cybersecurity at the Speed of Threats with Automation appeared first on Cybersecurity Insiders.

APIs are at the core of modern technology stacks and power organizations’ digital operations. Because they facilitate seamless connections between customers and vital data and services, it is no surprise that API usage has accelerated and continues to do so. Given the amount of sensitive information transmitted through them, malicious actors have also taken a keen interest in APIs, devising new attack tactics to exploit them discreetly. API attacks have plagued organizations of all sizes in recent times, implicating some of the largest global brands, such as Dell and T-Mobile, in attacks that led to the theft of the personally identifiable information (PII) of millions of customers.

The proliferation of generative AI (GenAI) technology also introduces another layer of complexity, enabling developers to create new APIs at scale within minutes. Organizations’ API ecosystems are growing exponentially, and security teams, as well as traditional protective solutions like API gateways and web application firewalls (WAFs), are ill-equipped to keep pace with changing API dynamics. Generative AI also gives malicious actors a leg up, providing the means to launch more plausible attack campaigns in higher volumes and create entirely new AI-based attacks that can evade existing security perimeters.

Our recent research report, the Salt Security State of API Security Report 2024, exposed many of the ongoing critical challenges that organizations face when trying to secure their API ecosystems. Most alarmingly, almost all (95%) of our survey respondents experienced security problems in production APIs within the past 12 months, with 23% suffering breaches due to API security inadequacies. This paints a clear picture: traditional API security controls and mechanisms are no match for protecting APIs, given their complexity, varying use cases, and unique behavioral attributes. The steep rise in API usage compounds the problem, with nearly two-thirds (66%) of respondents managing more than 100 APIs. 

The research also uncovered that most API security programs remain predominantly immature, despite nearly half (46%) of respondents indicating that API security is a C-level discussion within their organization. Less than 10% of organizations have an advanced API security program, and over one-third (37%) of organizations with APIs running in production do not have an active API security strategy. While rising threat levels have forced organizations to expedite their API security efforts and adopt purpose-built solutions, an accompanying strategy is often an afterthought. This component is essential for ensuring APIs are protected across their complete lifecycle. 

A successful API security strategy starts with deep and continuous discovery to find all APIs within the ecosystem. This knowledge helps to establish a robust API security posture governance program that spans from initial design to deployment. Posture governance programs help organizations gain complete assurance over their API landscape and acquire API asset intelligence, which can then be leveraged to eliminate blind spots and establish corporate-wide security standards and regulations across the entire API ecosystem. Posture governance also provides the foundation for effective threat protection: API attacks are predominantly logic-based, so API behavioral anomaly detection is difficult and requires a substantial volume of data and cloud compute power to identify anomalous behavior accurately. 
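As a minimal illustration of the discovery step, the Python sketch below diffs the endpoints observed in web access logs against a documented inventory to surface undocumented “shadow” APIs. The log format and the inventory are hypothetical placeholders; commercial discovery tools work from much richer traffic telemetry.

```python
# Illustrative sketch: surface undocumented ("shadow") API endpoints by
# diffing observed traffic against a documented inventory. The log format
# and the inventory below are hypothetical placeholders.
import re

DOCUMENTED_ENDPOINTS = {
    ("GET", "/api/v1/orders"),
    ("POST", "/api/v1/orders"),
    ("GET", "/api/v1/users"),
}

LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>/api/\S*?)(?:\?\S*)? HTTP')

def shadow_apis(log_lines: list[str]) -> set[tuple[str, str]]:
    """Return (method, path) pairs seen in traffic but absent from the inventory."""
    observed = set()
    for line in log_lines:
        match = LOG_LINE.search(line)
        if match:
            observed.add((match.group("method"), match.group("path")))
    return observed - DOCUMENTED_ENDPOINTS

if __name__ == "__main__":
    sample = [
        '10.0.0.5 - - [01/Jul/2024] "GET /api/v1/orders HTTP/1.1" 200',
        '10.0.0.9 - - [01/Jul/2024] "GET /api/v2/export?all=true HTTP/1.1" 200',
    ]
    for method, path in sorted(shadow_apis(sample)):
        print(f"Undocumented endpoint: {method} {path}")
```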

An API posture governance program provides organizations with the necessary context and API intelligence to establish and maintain a robust security baseline. This comprehensive understanding allows security teams to proactively identify and mitigate potential risks, ensuring that APIs adhere to established standards and best practices throughout their lifecycle. By continuously monitoring and assessing API configurations and vulnerabilities, organizations can effectively reduce their attack surface and minimize the likelihood of a security incident. While only 10% of organizations currently have an API posture governance strategy in place, according to our research, many organizations acknowledge its importance, and nearly half (47%) plan to implement such a strategy within the next 12 months.

Protecting APIs requires organizations to take this proactive approach. While implementing purpose-built solutions that can detect malicious actors and behavioral anomalies is crucial, it must also be accompanied by ongoing posture governance initiatives that improve overall API security posture. These initiatives will prevent cyber criminals from breaching an organization’s perimeter in the first instance and create stronger, more compliant API ecosystems. 

The post The Fundamentals to API Security Success appeared first on Cybersecurity Insiders.

The face of cyber threats has transformed dramatically over the decades. At first, they emerged as hacks, viruses and denial of service attacks, often hatched by young computer whiz kids chasing thrills and bragging rights. Then, criminal organizations leveraged increasingly sophisticated tools and techniques to steal private customer data, shut down business operations, access confidential/sensitive corporate information and launch ransomware schemes.

Today, artificial intelligence (AI) is empowering threat actors with exponentially greater levels of efficiency, volume, velocity and efficacy. A single AI tool can do the jobs of literally hundreds – if not thousands – of human hackers and spammers, with the ability to learn, process, adapt and strike with unprecedented speed and precision. What’s more, like the T-1000 android assassin in Terminator 2, AI can impersonate anyone – your friends, family, co-workers and even potential romantic partners – to develop and unleash the next generation of attacks.

This evolution of AI tools and the resulting increase in AI-generated cyberattacks has put immense pressure on organizations over the past 12 months. In light of these incidents, the FBI recently issued a warning about “the escalating threat” of AI-enabled phishing/social engineering and voice/video-cloning scams.

“These AI-driven phishing attacks are characterized by their ability to craft convincing messages tailored to specific recipients (while) containing proper grammar and spelling,” according to the FBI, “increasing the likelihood of successful deception and data theft.”

In seeking to lend further insights, we conducted a comprehensive analysis of activity from January 2023 to March 2024, to get a better sense of the evolving practices and trends associated with cybercrime and AI. As a result, we came up with the following top four forms of AI-enhanced threats:

Chatbot abuse. Underground forums have made available exposed ChatGPT login credentials, chatbots which automatically code malicious scripts and ChatGPT “jailbreaks” (the use of prompts to bypass certain boundaries or restrictions programmed into AI). However, we noticed that interest in these chatbots declined toward the end of 2023, as cybercriminals learned how to manipulate ChatGPT prompts themselves to obtain desired outcomes.

Social engineering campaigns. In exploring the possibilities of self-directed ChatGPT prompts, cybercriminals have focused intently on social engineering to trigger phishing-linked malware and business email compromise (BEC) attacks, among other exploits. AI makes it all too easy for them to conduct automatic translation, construct phishing pages, generate text for BEC schemes and create scripts for call service operators. As the FBI noted, the increasing sophistication of the technology is making it more difficult than ever to distinguish potentially harmful spam from legitimate emails.

Deepfakes. While deepfakes have been around for years, AI is taking the concept to new avenues of deception. Before, deepfakes required extensive audio, photographic and/or video material to set up the ruse. That’s why celebrity deepfakes grew so common: the internet contains an abundance of content about people in the news. AI, however, is allowing adversaries to more readily leverage content to target ordinary individuals and companies through disinformation campaigns in the form of social media posts impersonating and compromising people and businesses.

To cite one prominent example, the “Yahoo Boys” used deepfakes to carry out pseudo romance/sextortion scams – creating fake personas and gaining the trust of victims and tricking them into sending compromising photos – and then forcing the victims to pay money to avoid having the photos released publicly. In another example, a threat actor advertised synthetic audio and video deepfake services in November 2023, claiming to be able to generate voices in any language using AI, for the purposes of producing bogus advertisements, animated profile pictures, banners and promotional videos.

Know-your-customer (KYC) verification bypasses. Organizations have used KYC verification to confirm customers’ identity, financial activities and risk level in order to prevent fraud. Criminals, of course, are always seeking to circumvent the verification process and are now deploying AI to do so. A threat actor using the name John Wick allegedly operated a service called OnlyFake, which used “neural networks” to make realistic-looking photos of identification cards. Another, going by the name *Maroon, advertised KYC verification bypass services that supposedly can unlock all accounts requiring facial verification, such as those which direct users to upload their photos in real time from their phone camera.

If there is a common theme we found in our analysis, it’s that AI isn’t changing the intended end game for cybercriminals – it’s just making it much easier to get there, more swiftly and successfully. The technology allows for refinements which directly lead to more sophisticated, less detectable and more convincing threats. That’s why security teams should take heed of the described developments/trends as well as the FBI warning and pursue new solutions and techniques to counter ever-increasingly formidable, AI-enabled attacks. If history has taught us anything, it’s that the first step in effectively confronting a new threat is fully understanding what it is.

The post The Top 4 Forms of AI-Enabled Cyber Threats appeared first on Cybersecurity Insiders.

Sizable fines imposed for data breaches in recent years indicate that regulators are increasingly determined to crack down on organizations that fail to adequately protect consumer data. Meta, for example, was fined a record $1.3 billion in 2023 for violating European Union data protection regulations. 

This regulatory pressure is also influencing consumer behavior, with nearly two in five Americans (38%) using social media less frequently due to concerns about data privacy. With this in mind, experts at Kiteworks, which unifies, tracks, controls, and secures sensitive content communications with a Private Content Network, investigated leading social media platforms to understand how they harvest personal data.

What Types of Data Does Each Social Media App Collect?

The Data Collected Across Platforms

As stated in their privacy policies, Meta, X, and TikTok all collect personally identifiable information (PII), including username, password, email, phone number, date of birth, language, location, and address book uploads. 

All three social platforms also collect payment information and usage data, which details how users interact and engage with the platforms. Meta, X, and TikTok also collect content data, including posts, messages, photos, videos, and audio data.

How is the Data Used? 

While each privacy policy outlines slightly different uses for the information they gather, the most common use case is to personalize and enhance user experience by providing customized content and ads. Additionally, all three emphasize the importance of data collection to ensure safety and security and support research. 

Meta, for example, claims to use personal data to support the research and improvement of their products, including “personalizing features, content and recommendations”. Similarly, TikTok states that collected information can be used for “research, statistical, and survey purposes.” 

As of February 9, 2024, X revoked free access to its API, which previously allowed public posts on the platform to be used freely for research purposes. This change underscores the platform’s shift towards stricter control over user data. X has, however, stated that their API can be used to “programmatically retrieve and analyze X data,” ensuring that public information remains accessible for research.

Sharing Information

Meta, X, and TikTok indicate that public posts and content are viewable by anyone, depending on users’ profile privacy settings. For users with public accounts, their information is shared with partners and third parties for services, authentication, and advertising, as well as with legal entities for compliance with laws and user protection. 

Key Differences in Data Collection

Meta collects and integrates data across multiple platforms, including Facebook, Instagram, and WhatsApp, leading to a broader range of data collection compared to X and TikTok. 

Although X and TikTok collect extensive data, their focus is more on their individual platforms, resulting in Meta having not only more data but more detailed and comprehensive data from across its platforms and user interactions. 

All platforms collect payment information, but the context for collection varies: X collects this data for ads, Meta for marketplace transactions, and TikTok for in-app purchases.

Ultimately, with the extensive amount of personal data being collected by social media platforms, it’s crucial for users to be aware of what data is being collected and how it’s being used.

Data Collection Also Poses Risks for Businesses

Businesses must also be acutely aware of the risks social media platforms pose. In many instances, social media users are corporate employees who frequently post at work or about work. Posts about company events, partners, or customers, and images containing desks, computer screens, facilities or other proprietary assets, put companies at potential risk of exposing sensitive information like customer data and intellectual property.

To help navigate these challenges, Patrick Spencer, spokesperson at Kiteworks, has shared best practices for employees posting on social media:

“While individual consumer behavior is important, the harvesting of social media data can also significantly impact businesses. Unauthorized or inadvertent sharing of sensitive business information on platforms known for extensive data harvesting can lead to security breaches, intellectual property theft, and reputational damage.

Additionally, the exposure or unauthorized access of personally identifiable information (PII) through these platforms can expose both employees and their employers to various cyber threats. To mitigate these risks, we strongly encourage organizations to follow these recommendations:”

1. Thoroughly check privacy policies

“The most important thing you can do to protect sensitive data is to adopt a proactive approach to safeguarding digital assets and personal information. It’s pivotal to thoroughly read privacy policies before using any online service, paying attention to key sections such as data collection, usage and sharing. You need to understand what data is collected, how it is used, and who it is shared with.”

2. Avoid sharing sensitive information

“When posting on social media, do not include photos of workspaces where customer, financial, or other sensitive content may be visible on desks or computer screens. Refrain from posting images or descriptions of proprietary equipment or research without explicit permission from your employer.”

3. Use strong security practices

“Organizations should take a ‘zero-trust’ approach to protecting their business, which includes their content. In a zero-trust security approach, no user has unfettered access to all systems. A ‘content-defined zero-trust’ approach takes this model a step further, to the content layer. Organizations can protect their sensitive content when they can see where it sits in the organization, who has access to it, and what’s being done with it. 

Similarly, employees should be cautious with the permissions they grant to apps and third-party integrations. Implement strong, unique passwords for your social media accounts and enable multi-factor authentication where possible. Regularly review and revoke access for any apps that are no longer needed to minimize potential security risks.”

4. Stay informed and educated

“Provide employee training on cybersecurity and best practices for social media use. Stay updated on the latest threats and techniques used in social engineering attacks. Regularly audit and review social media activity across the company to ensure that no sensitive information has been inadvertently shared.”

“By taking these steps and educating employees about the privacy policies of the platforms they use, businesses can mitigate risk and maintain better control over their digital footprint. Protecting personal and business data is not just an individual responsibility but a collective effort that requires vigilance and continuous education.” 

The post Social media platforms that harvest the most personal data appeared first on Cybersecurity Insiders.

In today’s digital landscape, organizations face a myriad of security threats that evolve constantly. Among these threats, human risk remains one of the most significant and challenging to mitigate. Human Risk Management (HRM), the next step for a mature Security Awareness Program, is an approach that focuses on understanding, managing, and reducing the risks posed by human behavior within an organization. Unlike traditional compliance training programs that often rely solely on annual computer-based training, HRM is a comprehensive strategy aimed at securing the workforce by fostering a strong security culture and changing employee behavior.

What is Human Risk Management?

Human Risk Management is a holistic approach to cybersecurity that goes beyond mere awareness. It encompasses various methods and practices designed to understand the human element in security, identify vulnerabilities, and implement strategies to mitigate risks. HRM involves continuous education, regular engagement, and behavior modification techniques to ensure that employees not only understand security policies but also embody them in their daily activities.

The Importance of Human Risk Management

1. Human Error is Inevitable: Despite advancements in technology and automated security measures, human error remains a predominant cause of security breaches. Employees may fall victim to phishing attacks, use weak passwords, or inadvertently disclose sensitive information. HRM aims to minimize these errors by instilling a culture of vigilance and accountability.

2. Dynamic Threat Landscape: Cyber threats are constantly evolving. What was a secure practice yesterday may not be sufficient today. HRM ensures that employees are regularly updated on the latest threats and best practices, making the workforce adaptable to new security challenges.

3. Building a Security Culture: A strong security culture is one where security is ingrained in the organizational ethos. HRM helps in building such a culture by promoting shared values, beliefs, and practices regarding security. This cultural shift is crucial for long-term resilience against cyber threats.

4. Beyond Compliance: While compliance with regulations and standards is essential, HRM focuses on building security into the fabric of the organization. This proactive approach not only meets compliance requirements but also enhances overall security posture.

HRM vs. Traditional Compliance-Driven Programs

Traditional compliance programs often consist of periodic training sessions that employees must complete to comply with organizational policies. While these programs are necessary, they are not sufficient for mitigating human risk effectively. Here’s how HRM differs:

1. Continuous Learning and Engagement: HRM is an ongoing process that involves continuous learning and engagement. Instead of one-off training sessions, HRM includes regular workshops, phishing simulations, interactive seminars, and real-time feedback. This constant engagement helps in reinforcing good security practices and keeping security top of mind for employees.

2. Behavioral Change: The core of HRM is behavioral change. It uses psychological principles to understand why employees might engage in risky behaviors and employs strategies to modify those behaviors. Techniques such as positive reinforcement, gamification, and peer influence are used to encourage secure behavior.

3. Role-Based Training: HRM recognizes that one size does not fit all. Different employees have different roles, responsibilities, and levels of access to sensitive information. HRM tailors role-based security training and communication to address the specific needs and risks associated with each role, making the training more relevant and effective.

4. Metrics and Analytics: Effective HRM involves measuring the impact of training and engagement activities. Metrics such as phishing test results, incident reports, and employee feedback are analyzed to assess the effectiveness of the HRM program and identify areas for improvement (a brief illustration of this kind of measurement follows below).
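As a brief illustration of the metrics point above, the following Python snippet computes phishing-simulation click rates per campaign and the quarter-over-quarter change. The campaign figures are made-up examples.

```python
# Illustrative sketch: track phishing-simulation click rates over time to
# gauge whether awareness training is changing behaviour. Data is made up.
campaigns = [
    {"quarter": "2023-Q3", "sent": 500, "clicked": 95},
    {"quarter": "2023-Q4", "sent": 520, "clicked": 70},
    {"quarter": "2024-Q1", "sent": 510, "clicked": 41},
]

previous = None
for campaign in campaigns:
    rate = campaign["clicked"] / campaign["sent"] * 100
    trend = "" if previous is None else f" ({rate - previous:+.1f} pts)"
    print(f'{campaign["quarter"]}: {rate:.1f}% click rate{trend}')
    previous = rate
```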

Driving a Strong Security Culture

A strong security culture is the ultimate goal of Human Risk Management. This culture is characterized by:

1. Leadership Involvement: Senior leadership must champion the cause of security, setting the tone for the entire organization. Their involvement demonstrates the importance of security and encourages employees to take it seriously.

2. Open Communication: Encouraging open communication about security issues helps in creating a supportive environment where employees feel comfortable reporting suspicious activities without fear of retribution.

3. Empowerment: Empowering employees with the knowledge and tools they need to protect themselves and the organization is key. This includes not only technical training but also fostering a sense of ownership and responsibility for security.

4. Recognition and Rewards: Recognizing and rewarding employees who demonstrate good security practices can motivate others to follow suit. This positive reinforcement helps in embedding security into the organizational culture.

Conclusion

Human Risk Management is a critical component of an organization’s overall cybersecurity strategy. By going beyond just annual training and focusing on continuous engagement, behavioral change, and building a strong security culture, HRM effectively reduces the risks posed by human behavior. For senior leadership, investing in HRM is investing in the long-term security and resilience of the organization. It is about creating an environment where every employee understands their role in protecting the organization and is committed to maintaining a secure workplace.

Learn more about HRM and securing your workforce in the three-day SANS LDR433 Managing Human Risk course.

The post Human Risk Management: The Next Step in Mature Security Awareness Programs appeared first on Cybersecurity Insiders.

The evolving technological landscape has been transformative across most industries, but it’s arguably in the world of finance where the largest strides have been taken. Digital calculators and qualifier tools have made it quick and easy for customers to apply for mortgages and substantial loans in a matter of minutes. Elsewhere, the continued shift towards online shopping in favor of brick and mortar stores means more money is changing hands over the internet than ever before. 

The net result is a heightened focus on online cyber security by banks and other types of financial lenders. Unsurprisingly, businesses in this sector are among the most susceptible to large monetary attacks. In 2023 alone, losses per instance of cybercrime totalled a staggering $5.9 million for financial institutions. 

With it more pivotal than ever for these organizations to do what they can to stay safe, an increasing number are taking note of what can be done to alleviate the threat of online criminal activity. In this short guide, we’ll discuss some of the best policies to implement. From educating those you work most closely with to rethinking how you react to crime, here are four of the best approaches to take. 

Targeting vulnerabilities in the supply chain

Cyber criminals often choose to avoid targeting financial institutions directly, owing to the increasing amount of effort these enterprises are taking to protect themselves. As a result, they’ll look for weaknesses within a supply chain to exploit – usually in the form of a vendor or their software provider.

This is something which needs to be factored into any partnership with a third-party vendor or store. Financial businesses should evaluate the security structure of any of these websites, asking for clear guidance on exactly what measures are being taken to keep financial information safe. Adopting a “Zero Trust” network architecture, where every attempt to access your network is treated as a breach until proven otherwise, is another viable step.

Utilizing strong cyber security software 

Criminals most commonly target their victims’ confidential or private information. This type of attack accounts for 64% of all cyber crimes carried out against financial institutions. The solution here is to guarantee that all software and online firewalls being utilized are as up-to-date and comprehensive as possible. 

This extends beyond just the installation and use of a trusted cyber security software. Measures which financial lenders can take to keep data and other sensitive information safe include: 

  • Securing all components of a network to ensure only approved users are allowed access
  • Following a strict schedule for patching any software issues
  • Regularly reviewing and deleting any unnecessary user accounts (a short sketch of this check follows below) 
  • Segmenting critical network components and services 
  • Checking how comprehensive the system is with regular vulnerability scans 

It’s these preventative measures which greatly reduce the chance of falling victim to an attack. 
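To make the account-review measure above concrete, here is a hedged Python sketch that flags dormant accounts from a CSV export of an identity system. The column names, file layout, and 90-day threshold are illustrative assumptions.

```python
# Illustrative sketch: flag dormant accounts from a CSV export of the
# identity system. The column names and the 90-day cutoff are assumptions.
import csv
from datetime import datetime, timedelta

DORMANCY_CUTOFF = timedelta(days=90)

def dormant_accounts(path: str) -> list[str]:
    """Return usernames whose last_login is older than the cutoff."""
    now = datetime.now()
    flagged = []
    with open(path, newline="") as handle:
        # Expected layout (hypothetical): username,last_login
        # e.g.  jsmith,2024-01-15T09:30:00
        for row in csv.DictReader(handle):
            last_login = datetime.fromisoformat(row["last_login"])
            if now - last_login > DORMANCY_CUTOFF:
                flagged.append(row["username"])
    return flagged

if __name__ == "__main__":
    for user in dormant_accounts("accounts_export.csv"):
        print("Review for removal:", user)
```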

Educating employees 

Your staff are the beating heart of your organization. Unfortunately, they’re also a factor in a large proportion of successful attacks: it’s estimated that as many as 90% of cyber crimes are made possible by human error. 

The easiest solution here is to make sure employees are being educated properly through regular security awareness training. This should involve providing clear examples of modern tricks criminals are using, as well as highlighting a detailed breakdown of common scams like baiting, phishing, whaling and scareware.

Having a robust recovery strategy 

While in an ideal world this step wouldn’t ever be necessary, sometimes cyber crime is unavoidable. The best way to counter being a victim is having a strong policy in place to help you immediately deal with and recover from an attack. The quicker this is enacted, the better. 

Techniques to adopt here could be to: 

  • Ensure all incidents and post-attack workflows are clearly documented and accessible
  • Carry out regular cyber recovery exercises, audits, and penetration testing
  • Maintain a good working relationship with federal and local law enforcement agencies to make communication seamless
  • Think about having a cyber insurance policy to help with the immediate financial aftermath 

By knowing how best to react to a breach, a financial lender can mitigate a lot of the more severe resultant issues.

While cyber crime isn’t going to cease any time soon, the combative approach which financial organizations are taking to dampen its impact is helping to keep trillions of dollars safe every year. Make sure to use this guide as your starting point when thinking about your own cyber threat prevention strategy. 

The post How do financial lenders avoid cyber threats? appeared first on Cybersecurity Insiders.

The history of artificial intelligence (AI) is a fascinating journey of innovation and discovery that spans decades. From the early days of simple machine learning algorithms to today’s advanced neural networks, AI has become an integral part of our daily lives. AI Appreciation Day, celebrated on July 16th, is a testament to this remarkable progress and a day to acknowledge the contributions of AI to society.

As we look back on the milestones of AI, we see a timeline marked by significant breakthroughs that have pushed the boundaries of what machines can do. The development of generative AI, such as ChatGPT, Bing Chat, and Google’s Bard, alongside image creators like DALL-E 2 and Midjourney, has brought AI into the spotlight, showcasing its ability to enhance human creativity and decision-making across various sectors. 

AI Appreciation Day not only celebrates these advancements but also encourages reflection on AI’s ethical and security implications. It’s a day to consider how we can continue harnessing AI’s benefits while ensuring its responsible use. As we transition into the AI Age, it’s crucial to maintain a balance between innovation and the protection of our values and privacy. 

The following expert commentary will delve deeper into these themes, offering insights from leaders in the field who have witnessed AI’s evolution firsthand. Their perspectives will shed light on the current state of AI, its potential for the future, and the challenges we must address to ensure its beneficial integration into society. 

Aviral Verma, Lead Threat Intelligence Analyst, Securin 

“We are on course towards Artificial General Intelligence, or AGI, where AI goes beyond imitation and can exhibit human-like cognitive abilities and reasoning. AI that can grasp the nuances of language, context and even emotions. I understand the side of caution, the fear of AI replacing humans. But I envision this evolution to enhance human-AI symbiotic relationships, where its true potential lies in complementing our strengths and weaknesses. Humanity is a race of creators, inventors, thinkers, and tinkerers; AGI can help us be even better at all those things and act as a powerful amplifier for human ingenuity. 

To promote safety for all users and responsible AI deployment, developers must uphold Choice, Fairness, and Transparency as three critical design pillars: 

• Choice: It’s essential that individuals have meaningful choices regarding how AI systems interact with them and affect their lives. This includes the right to opt-in or opt-out of AI-driven services, control over data collection and usage and clear explanations of how AI decisions impact them. Developers should prioritize designing AI systems that respect and empower user autonomy. 

• Fairness: AI systems must be developed and deployed in ways that ensure fairness and mitigate biases. This involves addressing biases in training data, algorithms and decision-making processes to prevent discrimination based on race, gender, age or other sensitive attributes. Fairness also encompasses designing AI systems that promote equal opportunities and outcomes for all individuals, regardless of background or circumstances. 

• Transparency: Transparency is crucial for building trust in AI systems. Developers should strive to make AI systems understandable and explainable to users, stakeholders and regulators. This includes providing clear explanations of how AI decisions are made, disclosing limitations and potential biases, and ensuring transparency in data collection, usage and sharing practices. Transparent AI systems enable scrutiny, accountability and informed decision-making by all parties involved.

The tech industry is on the edge of something truly exciting, and I am optimistic about the advancements individuals and organizations can achieve with AI. To build confidence in AI, we should focus more on Explainable AI (X-AI). By clarifying AI’s decision-making processes, X-AI can alleviate the natural skepticism people have about the “black box” nature of AI. This transparency not only builds trust but also lays a solid foundation for future advancements. With X-AI, we can move beyond the limitations of a “black box” approach and foster informed, collaborative progress for all parties involved.” 

Anthony Cammarano, CTO & VP of Engineering, Protegrity 

“On this AI Appreciation Day, we reflect on AI’s remarkable journey to an everyday consumer reality. As stewards of data security, we recognize AI’s transformative impact on our lives. We celebrate AI’s advancements and democratization, bringing powerful tools into the hands of many. Yet, as we embrace these changes, we remain vigilant about the security of the data that powers AI.

Vigilance takes understanding the nuances of data protection in an AI-driven world. It takes a commitment to securing data as it traverses the complex pipelines of AI models, ensuring that users can trust the integrity and confidentiality of their most sensitive information. Today, we appreciate AI for its potential and challenges, and we renew our commitment to innovating data security strategies that keep pace with AI’s rapid evolution.

As we look to the future, we see AI not as a distant concept but as a present reality that requires immediate attention and respect. We understand that with this great power comes great responsibility, and we are poised to meet the challenges head-on, ensuring that our data—and, by extension, our AI—is as secure as it is powerful. Let’s continue to appreciate and advance AI, but let’s do so with the foresight and security to make its benefits lasting and its risks manageable.” 

Kathryn Grayson Nanz, Senior Developer Advocate, Progress 

This AI Appreciation Day, I would encourage developers to think about trust and purposefulness, because when we use AI technology without intention, we can actually do more harm than good. It’s incredibly exciting to see GenAI develop so quickly and make such dramatic leaps forward. But it’s also a responsibility to build safely with a fast-moving technology. 

It’s easier than ever before to take advantage of AI to enhance our websites and applications, but part of doing so responsibly is being aware of the inherent risk – and doing whatever we can to mitigate it. Keep an eye on legal updates, and be ready to make changes in order to comply with new regulations. Build trust with your users by sharing information freely and removing the “black box” feeling as much as possible. Make sure you’re listening to what users want and implementing AI features that enhance – rather than diminish – their experience. And establish checkpoints and reviews to ensure the human touch hasn’t been removed from the equation entirely. 

Arti Raman (She/Her), CEO and founder, Portal26 

“Generative artificial intelligence (GenAI) offers employees and the C-suite a new arsenal of tools for productivity compared to the unreliable AI we’ve known for the past couple of decades, but as we celebrate these advancements this AI Appreciation Day, it’s less clear how organizations plan to make their AI strategies stick. They are still throwing darts in the dark, hoping to land on the perfect implementation strategy.

For those looking to make AI work for them and mitigate the risks: 

1. The technology to address burning security questions regarding GenAI has only been around for approximately six months. Many companies have fallen victim to the negative consequences of GenAI and its misuse. Now is the time to ask, ‘How can I have visibility into these large language models (LLMs)?’ 

2. The long-term ability to audit and have forensics capabilities across internal networks will be crucial for organizations wanting to ensure their AI strategies work for them, not against them.  

3. These core capabilities will ultimately drive employee education and an understanding of how AI tools are best utilized internally. You can’t manage what you can’t see or teach what you don’t know. Having the ability to see, collect and analyze how employees use AI, where they’re using it most and which tools they’re using is invaluable for long-term strategy.  

AI has marked a turning point globally, and we’re only at the beginning. As this technology evolves, so must our approach to ensuring its ethical and responsible usage.” 

Roger Brulotte, CEO, Leaseweb Canada 

“In an age where ‘data readiness’ is crucial for organizations, the rapid adoption of AI and ML highlights the need for cloud computing services. Canada stands as a pioneer in this technological wave, with its industries using AI to drive economic growth. Montreal is quickly establishing itself as an AI hub with organizations like Scale AI and Mila – Quebec Artificial Intelligence Institute. 

Companies working with AI models need to manage extensive data sets, requiring robust and flexible solutions for handling complex tasks, training on large datasets and navigating neural networks. While the fundamental architecture of AI may remain constant, scaling its components up and down is essential depending on the model’s state. As the data-driven landscape keeps evolving, organizations must select data and hosting providers who can keep up with the times and adjust as needed, especially as Canada implements its spending plan to bolster AI on a national level. 

On AI Appreciation Day, we recognize that superior AI outcomes are powered by data, which is only as effective as the solutions that enable its use and safeguarding.” 

Steve Wilson, CPO, Exabeam 

“My recognition of AI Appreciation Day is part celebration, part warning for fellow AI enthusiasts in the security sector. We’ve seen AI capabilities progress dramatically, from simple bots playing chess, to self-driving cars and AI-powered drones with a spectacular potential to truly change how we live. While exciting, AI innovation is often unpredictable. Tempering our enthusiasm is the sobering reality that rapid progress — while filled with opportunity — is never without its challenges.  

The fascinating evolution of AI and self-learning algorithms has presented very different obstacles for teams in the security operations center (SOC) combating adversaries. Freely available AI tools are assisting threat actors in creating synthetic identity-based attacks using fraudulent images, videos, audio, and more. This deception can be indistinguishable to humans and can exponentially raise the success rate for phishing and social engineering tactics. To defend, security teams should also be armed with AI-driven solutions for predictive analytics, advanced threat detection, investigation and response (TDIR), and exceptional improvements to workflow. 

Before jumping headlong into the excitement and potential of AI, it’s our responsibility to evaluate the societal impacts. We must address ethical concerns and build robust security frameworks. AI is already revolutionizing industries, creating efficiencies and opening possibilities we never could have imagined just a few short years ago. We’re only getting started, and by proceeding with cautious optimism we can remain vigilant to the obvious risks and avoid negative consequences, while still appreciating AI’s many benefits.” 

Anthony Verna, SVP and GM, Cubic DTECH Mission Solutions 

“In the ever-evolving landscape of modern warfare, the role artificial intelligence (AI) plays in dictating the trajectory of military operations cannot be overstated. As we continue to see the complexities of an AI-accelerated battlespace intensify, AI combined with machine learning (ML) and advanced data processing has become indispensable to ensuring the success of critical missions. 

It’s also essential to recognize how vital next-generation tactical edge-based technologies are in providing decision advantage, and how AI’s integration at the edge marks a substantial advancement in military operations. The capability to process and interpret data instantaneously at the point of collection offers commanders prompt, actionable insights, facilitating rapid and well-informed decisions. 

Modern operations demand immediate and precise data-to-decision capabilities to support mission-critical decisions at the swift pace of conflict today. This edge-based approach is crucial in denied, disrupted, intermittent, and limited (DDIL) environments where traditional communication channels may be compromised or unreliable.  

As we celebrate AI Appreciation Day, let us acknowledge AI’s profound impact on our military capabilities, ensuring our forces are equipped with the most advanced technology to face the challenges of modern warfare and maintain a strategic advantage.” 

Dave Hoekstra, Product Evangelist, Calabrio  

AI Appreciation Day is a day to honor the past and present accomplishments of artificial intelligence (AI). AI is not a novel creation, but a product of decades of inquiry and invention. It improves our lives and efficiency by allowing us to interact and obtain information more quickly and easily than ever. Recent AI breakthroughs have opened up exciting opportunities in education and innovation, providing powerful tools to analyze data and act on insights like never before.  

From early chatbots to advanced voicebots, contact center customers have long interacted with AI technology. But the latest innovations in AI help companies make sense of the data customers provide, like reviews, surveys or calls. Modern models can offer human and virtual agents ongoing feedback on customer interactions to improve them. Workstation copilots can also work alongside agents and help them find answers. While a helpful human touch will always be required in the contact center, these AI enhancements are becoming more and more essential for agents to perform their jobs effectively and to create a positive customer experience.   

While the contact center is poised for significant improvements with AI, there are still important questions remaining: How do we make sure AI tools are impartial, transparent and accountable? How do we maintain a human-focused and cooperative method of customer service? These are some of the challenges we are addressing as we work towards a more advanced, AI-driven future in the contact center. 

Cris Grossmann, CEO and founder, Beekeeper  

Each year, AI Appreciation Day serves as a reminder to embrace the transformative and powerful potential AI holds for frontline industries. The adoption of AI-powered tools by frontline businesses can provide managerial visibility, which is crucial for a more connected frontline workforce. Automated features like real-time evaluation of employee sentiment allow companies to proactively address concerns and prevent employee burnout. Utilizing AI to gauge employee sentiment not only improves retention and engagement but also unlocks new levels of operational efficiency that traditional methods cannot achieve.   

No matter how many advancements in technology we make, AI will never be able to replace frontline workers. But it does have the power to enhance the experience of both frontline workers and managers through smart, people-first strategies.  

 

The post AI and Ethics: Expert Insights on the Future of Intelligent Technology appeared first on Cybersecurity Insiders.