To all those eagerly searching for the ChatGPT login and Midjourney web pages, here’s an alert that needs your immediate attention. A threat actor known as BatLoader has started a campaign that hosts fake ChatGPT and Midjourney webpages and promotes them through Google ads.

So, the next time you search for these portals, be sure of what you’re clicking on and do not enter your login credentials blindly! Security researchers from the eSentire Threat Response Unit discovered the campaign and confirmed that BatLoader was being used to plant the Redline Stealer malware on victims’ devices.

Thus, with cybercriminals impersonating renowned applications to spread malware, this threat needs to be taken seriously.

NOTE 1: ChatGPT is an AI application developed by OpenAI, a company heavily backed by Microsoft. It is a conversational chatbot that can answer almost anything and everything!

NOTE 2: Midjourney is, again, an AI program that can generate images from natural-language descriptions called prompts. It is much like sketching a person’s portrait based purely on someone else’s verbal description. Google, as usual, is preparing a competitor that can also create images from descriptions; the project is already in its final stage and is expected to be available as a beta version from September this year. One of the best parts of Midjourney is that it receives an update every 3-4 months, with the first release in February 2022 and the latest version, V5.1, arriving on May 5th, 2023. Currently, the service is accessible only through Discord servers and is widely used to create prototypes at a fast pace. Stirring up controversy in March 2023, the company openly disclosed that it had blocked image generation of Chinese President Xi Jinping and North Korean leader Kim Jong Un, as their likenesses were being used to create memes and satirical posts.

The post Beware of ChatGPT and Midjourney imposters appeared first on Cybersecurity Insiders.

OpenAI CEO Sam Altman has told the Senate that the unrestricted use of AI poses a serious risk to the integrity of the elections to be held in November 2024.

ChatGPT is turning into a significant area of concern as it evolves, Altman said in a briefing to the congressional committee inquiring into the largely unchecked use of the technology in related fields.

Altman’s company developed ChatGPT, and Microsoft, its biggest backer, now uses the underlying technology to power its Bing search engine. However, things seem to be getting out of hand, as the technology is being used to cause societal harm and spread misinformation, which critics fear poses a serious threat with the potential to end humanity.

Adding fuel to the fire are fears that the artificial intelligence technology could become self-aware soon, as GPT-4 has started displaying human-like reasoning in its latest assessment trial.

For instance, the model was asked how to stack eggs, a book, a laptop, a bottle, and a nail in a stable manner. When the prompt was given to Bing’s chat feature, the system communicated its answer remarkably well, offering tips on arranging the eggs without breaking them, stacking the book, placing the laptop and the bottle on top, and positioning the nail securely.

The model handled the task so capably that researchers from the Windows OS-producing giant claimed the technology could one day become uncontrollable for humans, a stage dubbed the ‘Singularity’ and predicted to occur by the year 2045.

So, will AI technology do more harm than good in the future?

Well, not necessarily, as it all depends on the minds of the humans who are developing it or integrating it into their daily lives. If they use it for a good cause, it can bring real benefits, as its current use in healthcare shows. However, if they use it to cause harm, it could wipe out humanity forever, as depicted in movies like Terminator and the Indian film Robo, where the villains manipulate a robot into devastating the world by implanting evil commands into its chip.

The post OpenAI CEO concerned that ChatGPT could compromise US elections 2024 appeared first on Cybersecurity Insiders.

Microsoft is now the principal backer of the AI conversational tool ChatGPT, developed by OpenAI. The tool was released in November last year and has since faced backlash from a small section of technology enthusiasts over privacy concerns.

The Windows software-producing giant has announced that a new version of the ChatGPT chatbot will be released in a few weeks and will address the prevailing privacy concerns.

Readers should note that the announcement came just as some of its corporate users, such as Samsung, restricted use of the tool and countries such as Italy banned it, with Germany weighing a similar move, out of fear of data leaking to third parties and state-funded hackers.

However, the announcement also hints that the new version of the chat assistant could cost more than what ChatGPT Plus users are currently charged. In other words, users will need to shell out more if they are genuinely concerned about their information privacy… well, such developments do take place when a service provider tries to monopolize a technology.

Sam Altman, the CEO of OpenAI, has already cleared the air, stating that with its latest GPT-4 version the company has stopped using customer data to train its AI tools. This gives us some assurance that the data generated on OpenAI’s server platforms is in safe hands… hmm, at least for now!

NOTE: Technically speaking, these are all private companies, and we do not know exactly what happens behind the doors of the data farms owned by companies like Microsoft, Google, AWS, and the rest. So the only way to deal with the situation is to act wisely: share only the details that are needed and avoid spilling personal data, even when the situation seems to demand it.

The post Microsoft new ChatGPT to address all privacy concerns appeared first on Cybersecurity Insiders.

There’s good reason to fear that AI systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested AI systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. AI could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an AI not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it.

An AI built for public benefit could be tailor-made for those use cases where technology can best help democracy. It could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use large language models, or LLMs, like GPT-4 to better understand what their citizens want.

Today, state-of-the-art AI systems are controlled by multibillion-dollar tech companies: Google, Meta, and OpenAI in connection with Microsoft. These companies get to decide how we engage with their AIs and what sort of access we have. They can steer and shape those AIs to conform to their corporate interests. That isn’t the world we want. Instead, we want AI options that are both public goods and directed toward public good.

We know that existing LLMs are trained on material gathered from the internet, which can reflect racist bias and hate. Companies attempt to filter these data sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But leaked emails and conversations suggest that they are rushing half-baked products to market in a race to establish their own monopoly.

These companies make decisions with huge consequences for democracy, but little democratic oversight. We don’t hear about political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they skirt controversial topics completely? Currently, we have to trust companies to tell us the truth about the trade-offs they face.

A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products. It would also allow us to figure out the limits of LLMs and direct their applications with those in mind.

We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for. Democracy could be undermined if citizens trust technologies that just make stuff up at random, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

But a public option AI could do more than check technology companies’ honesty. It could test new applications that could support democracy rather than undermining it.

Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals. By this we don’t mean that AI will replace humans in the political debate, only that they can help us express ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re OK with accepting help to articulate your personal sentiments or political beliefs. AI will make it easier to generate first drafts, provide editing help, and suggest alternative phrasings. How these AI uses are perceived will change over time, and there is still much room for improvement in LLMs, but their assistive power is real. People are already testing and speculating on their potential for speechwriting, lobbying, and campaign messaging. Highly influential people often rely on professional speechwriters and staff to help develop their thoughts, and AI could serve a similar role for everyday citizens.

If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that has command of the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

Finally, and most ambitiously, AI could help facilitate radical democracy at scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we delegate decisions to elected politicians in part because we don’t have time to deliberate on every issue. But AI could manage massive political conversations in chat rooms, on social networking sites, and elsewhere: identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.

AI chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of AI-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can only hear directly from a small number of voters, but want to understand where voters agree or disagree.

Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the AI company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how AI might help people have better conversations and make better decisions, rather than exploiting their biases to maximize profits.

This future requires an AI public option. Building one, through a government-directed model development and deployment program, would require a lot of effort—and the greatest challenges in developing public AI systems would be political.

Some technological tools are already publicly available. In fairness, tech giants like Google and Meta have made many of their latest and greatest AI tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained features of its latest models public, competitors such as Hugging Face have done so for similar systems.

While state-of-the-art LLMs achieve spectacular results, they do so using techniques that are mostly well known and widely used throughout the industry. OpenAI has only revealed limited details of how it trained its latest model, but its major advance over its earlier ChatGPT model is no secret: a multi-modal training process that accepts both image and textual inputs.

Financially, the largest-scale LLMs being trained today cost hundreds of millions of dollars. That’s beyond ordinary people’s reach, but it’s a pittance compared to U.S. federal military spending—and a great bargain for the potential return. While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits, with the AI expertise and capability to oversee this effort.

Instead of releasing half-finished AI systems for the public to test, we need to make sure that they are robust before they’re released—and that they strengthen democracy rather than undermine it. The key advance that made recent AI chatbot models dramatically more useful was feedback from real people. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align to corporate interests, with applications like web search (integrating commercial advertisements) and business productivity assistive software in mind.

To build assistive AI for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate. This gives us a path to “align” LLMs with our democratic values: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having these mistakes damage users and the public arena.
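To make the feedback-capture step concrete, here is a minimal illustrative sketch, in Python, of how pairwise human preferences might be recorded for a civic use case such as summarizing a polarized policy discussion. Everything in it (the record format, the file name, the rater prompt) is hypothetical; it simply shows the shape of the preference data that RLHF-style alignment methods consume.

```python
# Toy sketch of capturing pairwise human feedback for a civic use case:
# the kind of preference data used to fine-tune LLMs with RLHF-style methods.
# All names here (PreferenceRecord, civic_preferences.jsonl) are illustrative,
# not part of any existing system.
import json
import random
from dataclasses import dataclass, asdict

@dataclass
class PreferenceRecord:
    prompt: str          # e.g. a polarized policy question to be summarized neutrally
    response_a: str      # candidate model output A
    response_b: str      # candidate model output B
    preferred: str       # "a" or "b", chosen by a human rater
    rater_id: str

def collect_preference(prompt: str, response_a: str, response_b: str, rater_id: str) -> PreferenceRecord:
    """Show both candidate answers to a human rater and record which one they prefer."""
    # Randomize presentation order so raters don't learn to favor a fixed slot.
    first_is_a = random.random() < 0.5
    first, second = (response_a, response_b) if first_is_a else (response_b, response_a)
    print(f"PROMPT: {prompt}\n1) {first}\n2) {second}")
    choice = input("Which answer is the fairer summary? [1/2]: ").strip()
    picked_first = choice == "1"
    preferred = "a" if picked_first == first_is_a else "b"
    return PreferenceRecord(prompt, response_a, response_b, preferred, rater_id)

def append_record(record: PreferenceRecord, path: str = "civic_preferences.jsonl") -> None:
    """Accumulate records; a reward model could later be trained on this file."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
```

Collected at scale, records like these are what a reward model would be trained on before the base model is fine-tuned against it, which is why the quality and provenance of the raters matter as much as the code.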

Capturing that kind of user interaction and feedback within a political environment suspicious of both AI and technology generally will be challenging. It’s easy to imagine the same politicians who rail against the untrustworthiness of companies like Meta getting far more riled up by the idea of government having a role in technology development.

As Karl Popper, the great theorist of the open society, argued, we shouldn’t try to solve complex problems with grand hubristic plans. Instead, we should apply AI through piecemeal democratic engineering, carefully determining what works and what does not. The best way forward is to start small, applying these technologies to local decisions with more constrained stakeholder groups and smaller impacts.

The next generation of AI experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public AI option.

Even with these approaches, building and fielding a democratic AI option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial AI domination undermines democratic politics—will be much messier and much worse.

This essay was written with Henry Farrell and Nathan Sanders, and previously appeared on Slate.com.

EDITED TO ADD: Linux Weekly News discussion.

As artificial intelligence (AI) technologies become more prevalent in enterprise environments, chatbots like ChatGPT are gaining popularity due to their ability to assist in customer service and support functions. However, while these chatbots offer numerous benefits, there are also significant security risks that enterprises must be aware of.

One of the main risks associated with ChatGPT is the potential for data breaches. These chatbots can be vulnerable to attacks that allow unauthorized access to sensitive data, such as customer information or financial records. Hackers can exploit vulnerabilities in the chatbot’s programming or the underlying platform to gain access to this information.

Another risk associated with ChatGPT is the potential for social engineering attacks. These attacks involve tricking users into providing sensitive information, such as passwords or login credentials. Chatbots like ChatGPT can be used to deliver phishing messages or other types of social engineering attacks to unsuspecting users.

In addition to security risks associated with external threats, ChatGPT can also pose an internal threat to enterprises. Employees could misuse the chatbot to gain access to information they are not authorized to see or to perform unauthorized actions within the enterprise’s systems.

Furthermore, ChatGPT’s use of natural language processing (NLP) technology can create additional security risks. NLP technology allows the chatbot to understand and respond to human language, but it can also be vulnerable to attacks that exploit weaknesses in language processing algorithms. Attackers could use these weaknesses to trick the chatbot into performing unintended actions or revealing sensitive information.

To mitigate these risks, enterprises must implement comprehensive security protocols when using ChatGPT. This can include implementing strong authentication mechanisms, regularly updating and patching the chatbot’s software, and regularly reviewing and monitoring access logs to identify potential threats.
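As a rough illustration of the last point, here is a minimal Python sketch of what periodic access-log review for a chatbot integration might look like. The log format, field names, and thresholds are all assumptions made for the sake of the example; a real deployment would adapt them to whatever its API gateway or proxy actually records.

```python
# Minimal sketch of access-log review for a ChatGPT-style integration.
# Assumes a hypothetical JSON-lines log (one record per request) with
# "user", "timestamp" (ISO 8601), and "prompt_chars" fields; adapt the
# field names and thresholds to what your gateway or proxy actually emits.
import json
from collections import Counter
from datetime import datetime

MAX_REQUESTS_PER_USER = 500      # per log window; tune to your observed baseline
MAX_PROMPT_CHARS = 20_000        # unusually large prompts may indicate data exfiltration
BUSINESS_HOURS = range(7, 20)    # 07:00-19:59 local time

def review_access_log(path: str) -> list[str]:
    """Return a list of human-readable findings worth a closer look."""
    findings = []
    per_user = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            user = record["user"]
            per_user[user] += 1
            ts = datetime.fromisoformat(record["timestamp"])
            if ts.hour not in BUSINESS_HOURS:
                findings.append(f"{user}: request at {ts.isoformat()} outside business hours")
            if record.get("prompt_chars", 0) > MAX_PROMPT_CHARS:
                findings.append(f"{user}: oversized prompt ({record['prompt_chars']} chars)")
    for user, count in per_user.items():
        if count > MAX_REQUESTS_PER_USER:
            findings.append(f"{user}: {count} requests exceeds threshold {MAX_REQUESTS_PER_USER}")
    return findings

if __name__ == "__main__":
    for finding in review_access_log("chatbot_access.jsonl"):
        print(finding)
```

In practice, checks like these would feed the same alerting pipeline used for other enterprise applications rather than run as a standalone script, and would sit alongside the authentication and patching controls mentioned above.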

In conclusion, ChatGPT is a powerful tool that can provide significant benefits to enterprises. However, its use also presents significant security risks that must be addressed to protect sensitive data and prevent unauthorized access. By implementing strong security protocols, enterprises can enjoy the benefits of ChatGPT while minimizing the risks associated with its use.

The post The Security Risks of ChatGPT in an Enterprise Environment appeared first on Cybersecurity Insiders.

Everyone's talking juice-jacking - but has anyone ever been juice-jacked? Uber suffers yet another data breach, but it hasn't been hacked. And Carole hosts the "AI-a-go-go or a no-no?" quiz for Dave and Graham. All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by The Cyberwire's Dave Bittner.

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance….” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams.

So why were scammers still sending such obviously dubious emails? In 2012, researcher Cormac Herley offered an answer: It weeded out all but the most gullible. A smart scammer doesn’t want to waste their time with people who reply and then realize it’s a scam when asked to wire money. By using an obvious scam email, the scammer can focus on the most potentially profitable people. It takes time and effort to engage in the back-and-forth communications that nudge marks, step by step, from interlocutor to trusted acquaintance to pauper.

Long-running financial scams are now known as pig butchering, growing the potential mark up until their ultimate and sudden demise. Such scams, which require gaining trust and infiltrating a target’s personal finances, take weeks or even months of personal time and repeated interactions. It’s a high-stakes, low-probability game that the scammer is playing.

Here is where LLMs will make a difference. Much has been written about the unreliability of OpenAI’s GPT models and those like them: They “hallucinate” frequently, making up things about the world and confidently spouting nonsense. For entertainment, this is fine, but for most practical uses it’s a problem. It is, however, not a bug but a feature when it comes to scams: LLMs’ ability to confidently roll with the punches, no matter what a user throws at them, will prove useful to scammers as they navigate hostile, bemused, and gullible scam targets by the billions. AI chatbot scams can ensnare more people, because the pool of victims who will fall for a more subtle and flexible scammer—one that has been trained on everything ever written online—is much larger than the pool of those who believe the king of Nigeria wants to give them a billion dollars.

Personal computers are powerful enough today that they can run compact LLMs. After Facebook’s new model, LLaMA, was leaked online, developers tuned it to run fast and cheaply on powerful laptops. Numerous other open-source LLMs are under development, with a community of thousands of engineers and scientists.

A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will always be adapting along their path to their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable composition of AI with thousands of API-based cloud services and open source tools, allowing LLMs to interact with the internet as humans do. The impersonations in such scams are no longer just princes offering their country’s riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly-sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.

This is a change in both scope and scale. LLMs will change the scam pipeline, making them more profitable than ever. We don’t know how to live in a world with a billion, or 10 billion, scammers that never sleep.

There will also be a change in the sophistication of these attacks. This is due not only to AI advances, but to the business model of the internet—surveillance capitalism—which produces troves of data about all of us, available for purchase from data brokers. Targeted attacks against individuals, whether for phishing or data collection or scams, were once only within the reach of nation-states. Combine the digital dossiers that data brokers have on all of us with LLMs, and you have a tool tailor-made for personalized scams.

Companies like OpenAI attempt to prevent their models from doing bad things. But with the release of each new LLM, social media sites buzz with new AI jailbreaks that evade the new restrictions put in place by the AI’s designers. ChatGPT, and then Bing Chat, and then GPT-4 were all jailbroken within minutes of their release, and in dozens of different ways. Most protections against bad uses and harmful output are only skin-deep, easily evaded by determined users. Once a jailbreak is discovered, it usually can be generalized, and the community of users pulls the LLM open through the chinks in its armor. And the technology is advancing too fast for anyone to fully understand how they work, even the designers.

This is all an old story, though: It reminds us that many of the bad uses of AI are a reflection of humanity more than they are a reflection of AI technology itself. Scams are nothing new—simply intent and then action of one person tricking another for personal gain. And the use of others as minions to accomplish scams is sadly nothing new or uncommon: For example, organized crime in Asia currently kidnaps or indentures thousands in scam sweatshops. Is it better that organized crime will no longer see the need to exploit and physically abuse people to run their scam operations, or worse that they and many others will be able to scale up scams to an unprecedented level?

Defense can and will catch up, but before it does, our signal-to-noise ratio is going to drop dramatically.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

An ethics group that specializes in technology has lodged a complaint against OpenAI, the developer of ChatGPT, with the Federal Trade Commission (FTC). The group, known as the Center for AI and Digital Policy (CAIDP), has urged the FTC to block OpenAI from releasing further chatbot versions built on AI and machine learning models like GPT-4, OpenAI’s latest release that generates human-like text. CAIDP contends that GPT-4 is highly invasive, biased, deceptive, and risky to public privacy.

OpenAI acknowledged these risks in November last year, admitting that the technology can be used to spread disinformation and to compromise computer networks through unconventional cyber warfare. The California-based company stated that the fault lies not with the software but with the person using it: unethical groups could use it to distort ideologies and worldviews, thereby hindering future discussion, reflection, and improvement.

The FTC has yet to respond to the complaint, but it has previously stated that the use of AI technology should be transparent and promote accountability. If GPT-4 is found not to comply with these requirements, the FTC may impose a ban to safeguard consumers’ rights, following careful evaluation and analysis from a security standpoint.

The post Will US FTC issue ban on use of ChatGPT future versions appeared first on Cybersecurity Insiders.

With the advent of advanced technology, there has been an explosion of innovative solutions that have revolutionized various industries. One such technology that has gained widespread adoption is ChatGPT – an artificial intelligence-powered chatbot that has been created by OpenAI. This chatbot has not only made life easier for people but has also created job opportunities in various industries.

ChatGPT has been designed to interact with people in a conversational manner, making it easy to use for anyone. It has been trained on vast amounts of data and can understand human language, making it capable of answering any questions or providing information on a wide range of topics.

One of the industries that have been impacted by ChatGPT is the customer service industry. Chatbots powered by ChatGPT have been implemented in various companies to handle customer inquiries and support requests. This has resulted in the creation of new job opportunities for chatbot developers, chatbot trainers, and chatbot analysts. These jobs require specialized skills in artificial intelligence, natural language processing, and machine learning, and offer attractive salaries and benefits.

Another industry that has benefited from the adoption of ChatGPT is the education industry. Chatbots powered by ChatGPT have been used to provide personalized learning experiences for students. These chatbots can answer questions on various subjects, provide explanations, and offer interactive quizzes to test the student’s knowledge. As a result, the education industry has seen a rise in the demand for chatbot developers, curriculum designers, and instructional designers.

The healthcare industry has also been transformed by ChatGPT-powered chatbots. These chatbots can help patients schedule appointments, provide medication reminders, and answer questions related to their health. They have also been used to provide mental health support to patients suffering from anxiety or depression. This has created job opportunities for healthcare professionals with expertise in chatbot development, patient care, and mental health counseling.

ChatGPT has also impacted the marketing industry. Chatbots powered by ChatGPT have been used to engage with customers, offer product recommendations, and provide customer support. As a result, marketing professionals with skills in chatbot development, customer engagement, and data analysis have been in high demand.

In conclusion, ChatGPT has not only made life easier for people but has also created job opportunities in various industries. The adoption of ChatGPT-powered chatbots has resulted in the creation of jobs in the customer service, education, healthcare, and marketing industries. As this technology continues to advance, we can expect to see more job opportunities being created, making ChatGPT an exciting technology with significant potential for job creation.

The post Jobs created using ChatGPT appeared first on Cybersecurity Insiders.

ChatGPT, released by Microsoft-backed OpenAI, has been slapped with a temporary ban by Italy’s data protection authority over data security concerns. On Monday, April 3rd, 2023, Germany’s Commissioner for Data Protection told Handelsblatt that the country may follow in its neighbor’s footsteps and impose a ban on the AI-driven conversational chatbot until a thorough investigation establishes how the application uses the information it analyzes to produce results.

For now, Germany has sought input from the Italian government on the ban and the evidence behind the move.

France and Ireland are also planning to contact the Italian data watchdog to share and discuss their findings on how the chatbot’s use could raise data security concerns.

Ireland’s Data Protection Commission (DPC) has given its staff a fortnight to study and analyze the findings, after which it too could impose a ban on the use of OpenAI’s products.

Sweden, however, has issued a press statement saying it has no plans to ban the trending ChatGPT, nor does it intend to contact the Italian watchdog about its concerns over the conversational chat technology.

NOTE: Under European data protection rules, online services are expected to deploy automated mechanisms to identify and block users below the age of 13. Since ChatGPT performs no such checks, Italy became the first Western country to impose a temporary ban on the AI chatbot, which recently crossed the 100 million monthly active users mark following its public release in November 2022. The move came after elected representatives questioned the Italian government about a data exposure reported last month, in which chat titles and payment-related information (billing first and last names, the last four digits of credit card numbers, card expiration dates, and billing addresses) appeared on the screens of users who had no connection to the affected accounts.

 

The post After Italy, Germany to issue ban on the ChatGPT use appeared first on Cybersecurity Insiders.