Security researchers have uncovered a new flaw in some AI chatbots that could have allowed hackers to steal personal information from users. The flaw, which has been named "Imprompter", which uses a clever trick to hide malicious instructions within seemingly-random text. Read more in my article on the Hot for Security blog.
In episode 21 of "The AI Fix"", Mark and Graham comfort themselves with a limbless AI pet as they learn about a terrifying robot dog with a flamethrower, fission-powered data centres, AI suicide pods, and a multi-limbed robot with a passion for classical music. Graham finds out what happens if you sellotape an Alexa to a Chihuahua, and Mark asks AI Trump and AI Harris how many Rs there are in "strawberry". All this and much more is discussed in the latest edition of “The AI Fix” podcast by Graham Cluley and Mark Stockley.

AI-based chatbots are increasingly becoming integral to our daily lives, with services like Gemini on Android, Copilot in Microsoft Edge, and OpenAI’s ChatGPT being widely utilized by users seeking to fulfill various online needs.

However, a concerning issue has emerged from research conducted at the University of Texas at Austin’s SPARK Lab. Security experts there have identified a troubling trend: certain AI platforms are falling prey to data poisoning attacks, which manipulate search results—a phenomenon technically referred to as “ConfusedPilot.”

Led by Professor Mohit Tiwari, who is also the CEO of Symmetry Systems, the research team discovered that attackers are primarily targeting Retrieval Augmented Generation (RAG) systems. These systems serve as essential reference points for machine learning tools, helping them provide relevant responses to chatbot users.

The implications of such manipulations are significant. They can lead to the spread of misinformation, severely impacting decision-making processes within organizations across various sectors. This poses a substantial risk, especially as many Fortune 500 companies express keen interest in adopting RAG systems for purposes such as automated threat detection, customer support, and ticket generation.

Consider the scenario of a customer care system compromised by data poisoning, whether from insider threats or external attackers. The fallout could be dire: false information disseminated to customers could not only mislead them but also foster distrust, ultimately damaging the business’s reputation and revenue. A recent incident in Canada illustrates this danger. A rival company poisoned the automated responses of a real estate firm, significantly undermining its monthly targets by diverting leads to the competitor. Fortunately, the business owner identified the issue in time and was able to rectify the situation before it escalated further.

To those involved in developing AI platforms—whether you are in the early stages or have already launched your system—it’s crucial to prioritize security. Implementing robust measures is essential to safeguard against data poisoning attacks. This includes establishing stringent data access controls, conducting regular audits, ensuring human oversight, and utilizing data segmentation techniques. Taking these steps can help create more resilient AI systems, ultimately protecting against potential threats and ensuring reliable service delivery.

The post Data Poisoning threatens AI platforms raising misinformation concerns appeared first on Cybersecurity Insiders.

The government of Malta has introduced an innovative service to replace human-interfaced customer care with one powered by AI chatbots. The nation, located on the North African coast, resorted to this move to put an end to the dispensing of favors or jobs by ministers in exchange for materialistic bribes.

Generally, when someone calls the Malta government’s customer care, they start interacting with human employees who then connect them to related ministries.

After receiving numerous complaints from voters that the civil service top brass was acting on orders from ministers to dispense favors such as job requests, the government began searching for an alternative.

After exploring various options, the Malta government decided to replace human-interfaced customer care with one powered by the artificial intelligence-propelled chatbot Lumi.Chat40k. This digital interface will now act as a bridge between the ministries and the citizens.

Lumi Chat will assist the public with requests related to job sanctions, meetings with ministers, and complaints about civic issues and unfulfilled electoral promises.

Sources report that the conversational chatbot was programmed with data gathered from the 59 years of Malta’s governance and the prevailing laws. Thus, it can prove helpful and resourceful in making governance more efficient, corruption-free, and public-friendly.

NOTE: All chatbots, applications, and machine learning tools are eventually controlled by human minds, and such digital governance cannot put an end to the worldwide spread of corruption. Unless we humans change and do our work sincerely as per the expectations of the public, nothing can change in Malta or across the globe.

 

The post AI Chatbot customer care replacing Ministers and their favors appeared first on Cybersecurity Insiders.

Could a senior Latvian politician really be responsible for scamming hundreds of "mothers-of-two" in the UK? (Probably not, despite Graham's theories...) And should we be getting worried about the AI wonder that is ChatGPT? All this and more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault.
An AI chatbot is causing a stir - both impressing and terrifying users in equal measure. A security researcher discovers that a "smart" cam that doesn't use the internet is err.. using the internet. And university students revolt over under-the-belt surveillance. All this and much more is discussed in the latest edition of the award-winning "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Host Unknown's Thom Langford.