OpenAI-developed ChatGPT has hit the news headlines because user information has been leaked on the web by some threat actors who claim to have accessed and stolen data from the database of the OpenAI platform via a bug vulnerability.

As a result, credit card details, the last 4 digits of credit card numbers, credit card expiration dates, first and last names, and emails of those using the conversational AI are available for others to see. OpenAI acknowledged the issue as true in a statement released on Sunday and announced that information related to 1.2% of its overall subscribers was accessible and probably stolen by hackers.

Founded in 2015 by a group of angel investors led by Sam Altman, who is now the OpenAI CEO, ChatGPT is a large-scale language model trained with massive amounts of data to generate conversational responses. Now part of OpenAI, the AI-based model is funded by Microsoft for further development.

The platform currently acts as a centralized data generation trove capable of generating poems, articles, texts, and other written works related to academics.

Last Monday, a tech enthusiast who is a Twitter user warned ChatGPT users about leaking personal information from the platform due to a bug in the Redis Client Open Source library, redis-py. The bug also leaked data related to users who were using the search platform to generate content on various topics, and the content might have reached the desks of hackers, which is only a possibility.

NOTE: For the past two weeks, certain media resources related to the educational field have warned that the Artificial Intelligence-based company might encourage students to cheat in exams or essays, which is a threat to the integrity of academia.

 

The post ChatGPT users data leaked because of bug vulnerability appeared first on Cybersecurity Insiders.

After the release of ChatGPT in November 2022, the OpenAI CEO and the people behind the conversational chatbot launch say that they are equally scared of the negative consequences that the newly developed technology can fetch in the future.

Sam Altman, the tech brain leading the company, now owned by Microsoft, spoke a few words about what the world was intending to say about the tech.

Responding to a query expressed by a participant at the release of new AI tech GPT-4, Altman, 37, stressed that he, his team, and all the tech enthusiasts should take the pledge to protect the world in such a way that ChatGPT doesn’t negate the consequences for humanity.

Especially, the leader is worried that the two newly released AI-based technologies, ChatGPT and GPT-4, can play havoc by acting as large-scale sources to spread disinformation or fake news.

What if the human capable of generating a computer code could use the innovation to launch cyber attacks?…..said Altman.

Despite fears, the invention has the full potential to become a highly assisting technology for humanity, developed to date.

Forget all the innovations developed to date, like radio, TV, smartphone, computer, and internet; as we can gain a slew of benefits from the latest AI model GPT-4, as it can become the fastest growing consumer application in history, added Sam Altman.

NOTE- The insights were provided by the OpenAI CEO, after Elon Musk, the Tesla chief, denounced the move of Microsoft to disband the team that was over-viewing the regulatory ethics of AI. In one of his tweets posted in December last year, Musk proclaimed himself as the sole technology leader to call for AI safety regulation for over a decade.

 

The post We are scared of Artificial Intelligence says OpenAI CEO appeared first on Cybersecurity Insiders.

ChatGPT, the sensational conversational app of Microsoft, has been identified as a threat to national security due to its increased sophistication in phishing scams. The Silicon Valley sophisticated sensation developed by OpenAI has become a part of every tech discussion on LinkedIn and Redditt these days. People believe that it assists threat actors in launching cyber-attacks.

Security firm Darktrace has found that ChatGPT has the potential to assist cyber criminals in launching phishing emails, enabling adversaries to track down more targets and personalize the attacks to such a level that they yield sure shot results.

Therefore, is the newly developed technology a bane to us? Well, it depends on the mind that is using it. ChatGPT is a beautifully carved conversational bot that has tons of information loaded onto it and is capable of answering anything and everything.

However, if the human mind desires to used to launch sophistication driven cyber-attacks, then what is wrong with the software then?

For instance, a group of scientists belonging to the department of Virology were researching about an influenza virus in a Chinese lab of Wuhan. The virus somehow leaked from the research facility and it spread into a pandemic called COVID 19, claiming millions of lives worldwide.

Here the issue is not with the development of virus, as in practical it was done to create a medicine against it. But some human mind, whether wontedly or innocently, leaked it to the world and today we have come to a stage where we are discussing our lives before lockdown and after lock down.

So, Darktrace has issued a statement that ChatGPT can be used as a catalyst to launch phishing scams. But at the same time, it has the potential to simplify our lives to a great extent……isn’t it?

NOTE- From March last week, Apple Inc will be introducing to its users a new app called WatchGPT, that brings in all the power of the conversational AI app to the wrist. It will be available in the Apple App Store from 21st of this month and will eliminate the need for users to open the browser, type the URL and then take advantage of the service.

 

The post Britain Cybersecurity firm issues warning against Microsoft ChatGPT appeared first on Cybersecurity Insiders.

This is a good survey on prompt injection attacks on large language models (like ChatGPT).

Abstract: We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM.

In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations of the proposed attacks within synthetic applications. In summary, our work calls for an urgent evaluation of current mitigation techniques and an investigation of whether new techniques are needed to defend LLMs against these threats.

Any priced item in the world, mostly electronics, gets duplicated in China and is thereafter sold as a cost-effective product. Meaning, those who cannot afford a branded good can get the Chinese product for half or quarter of the price.

The same is also possible in the software world as almost 10 Chinese companies are competing to release a duplicate version of conversational software aka ChatGPT, owned and developed by OpenAI, a publicly funded company of Windows OS giant Microsoft.

Baidu, Inc. is the first to bring out its own version of chat-based AI bot that can give an answer to anything or everything. They say that ‘Ernie Bot’ will release in September this year and Robin Li, the CEO, states that it will be a revolutionary version of what the ChatGPT can produce now.

Similarly, Alibaba, Tencent, a team of AI researchers from Fudan University, is coming up with a similar chat-based AI bot. JD.com will announce ChatJD and Fudan University researchers are planning to bring their artificial intelligence propelled by conversational assistant named MOSS by November this year.

China Telecom, NetEase, Kuaishou Technology, Inspur Electronics, and Kunlun are also working on the model of a chat-based ai that is on par with Microsoft’s AI ChatGPT. But are holding the details tight for various reasons.

Well, after New York Times reporter Kevin Roose revealed some astonishing details about the new chat based conversational AI, not all are in favor of the newly innovated tech.

JP Morgan Chase has announced to its employees to stay away from the newly developed data tech as it feels more threatened by the concerns than its benefits.

US Military has been asked not to access the software-based service as it can gather classical information that the enemy or outsiders might seem interested..

NOTE 1- In practical, it is not the technology that needs to be blamed. But it’s the human mind that uses it should take the blame, as they can use it for or against the humankind. What’s you say?

NOTE 2-According to a survey that was published in PloS One Journal, about 39% of household chores such as housecleaning, cooking, taking care of children or the elderly can be automated. And within the next 10 years, most of these activities will be done by robots either hired or bought sole for the purpose. Seems like a character in the famous sitcom “Small Wonder” of late 80s where a small girl android helps a family in their day to day works with intelligence, comedy and some material to think of the Future!

 

The post China working on Microsoft OpenAI ChatGPT appeared first on Cybersecurity Insiders.

Microsoft has made it official that it is going to introduce the services of its AI ChatGPT on all its premium upcoming mobile phones. Therefore, by June this year, the Bing Chatbot will be offered as Bing Smartphone app and a support system for its edge browser, thus competing with Google in terms of AI propelled search results.

However, all doesn’t seem to go great for usage of artificial intelligence, as internationally renowned JPMorgan Chase has asked its employees to stop accessing the services of the ChatGPT.

Reasons are yet to be officially disclosed to the public. But persons familiar with the matter report that the decision was taken by the firm to stop projecting the minds of its employees by third party software.

Well, this happens to the first company to impose an apparent ban on the usage of the conversational AI software and, after weighing down the pros with cons, more companies will surely follow.

In another development to the newly developed software of OpenAI, some companies are thinking of integrating the service into their game management platforms. One such company is Texas Hold’em, where the odds of the poker game will be predicted to the users in a predefined way, provided they pay a premium.

WhatsApp is also working to integrate the ChatGPT AI software into its platform, allowing users to reply to chats on a user’s behalf.

Microsoft chief Satya Nadella claims that his company’s $3 billion investment in OpenAI will provide great returns in the future.

 

The post Microsoft ChatGPT usage virtually banned by JPMorgan Chase appeared first on Cybersecurity Insiders.

IBM Chief felt ChatGPT, an OpenAI developed a platform of Microsoft, has the potential to replace white-collar jobs such as insurance consultants, lawyers, accountants, computer programmers and admin roles.

Arvind Krishna, the lead of the technology at IBM, predicts that some sort of jobs will replace by AI models and so job steal is predictably possible. However, the chairperson and CEO of the tech giant felt the estimates might turn topsy-turvy, if those handling and developing the tech handle it in a more matured way.

Meaning, AI will play a role of a bridge between humans and machines and surely provide optimal results that weren’t witnessed all these days or from the beginning of the tech usage in late 90s.

Now coming to the big question, who developed ChatGPT and the answer is as follows-

Mira Murati, the 38-year-old CTO of OpenAI, the firm that owns AI powered ChatGPT. She has a bachelor of engineering degree taken from Thayer School of Engineering in Dartmouth and, after pursuing several jobs in other tech companies, joined as a VP in Open AI in 2018.

Her chat bot has hit several headlines in many countries across the globe and is mostly talked for its misuse by threat actors.

Therefore, it is clear. The tech has no flaw in it and it is the human mind that needs to be put to fault, say experts.

 

The post Microsoft ChatGPT has the potential to replace white-collar jobs says IBM Chief appeared first on Cybersecurity Insiders.