I have mixed feelings about this class-action lawsuit against OpenAI and Microsoft, claiming that they “scraped 300 billion words from the internet” without either registering as a data broker or obtaining consent. On the one hand, I want this to be a protected fair use of public data. On the other hand, I want us all to be compensated for our uniquely human ability to generate language.

There’s an interesting wrinkle on this. A recent paper showed that using AI-generated text to train another AI invariably “causes irreversible defects.” From a summary:

The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.
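The mechanism is easy to see in a toy simulation, far simpler than the paper’s actual experiments: repeatedly fit a Gaussian to a finite sample and draw the next “generation” only from that fit. The finite-sample estimate keeps losing the tails, so the variance tends to shrink toward a spike, the delta function mentioned above. Everything in this sketch (sample size, generation count) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse_demo(n_samples=20, generations=200):
    """Fit a Gaussian to each generation's samples, then draw the
    next generation from the fit -- training on model output alone."""
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # "human" data
    stds = [data.std()]
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()      # finite-sample fit
        data = rng.normal(mu, sigma, size=n_samples)
        stds.append(data.std())
    return stds

stds = collapse_demo()
print(f"initial std: {stds[0]:.3f}  final std: {stds[-1]:.6f}")
```

The spread collapses over generations: each fit slightly underestimates the true variance, and the errors compound because every generation sees only the previous generation’s output.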

Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data.

This is the same idea that Ted Chiang wrote about: that ChatGPT is a “blurry JPEG of all the text on the Web.” But the paper includes the math that proves the claim.

What this means is that text from before last year—text that is known human-generated—will become increasingly valuable.

UPS delivers some smishing advice (but have they kept something under wraps?), we ask ChatGPT to take a long hard look at itself, and we debate what the penalty should be for taking national secrets home with you. All this and much much more is discussed in the latest edition of the “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by Host Unknown’s sole founder Thom Langford.

An AI language model like ChatGPT cannot directly earn money for you. However, there are a few ways you can potentially utilize ChatGPT to generate income indirectly:

Content Generation: You can use ChatGPT to assist you in generating content for various purposes, such as writing articles, creating blog posts, or drafting social media posts. If you have a client or a platform that pays for content creation, you can leverage ChatGPT’s capabilities to enhance your productivity and potentially increase your earnings.

Writing Assistance: ChatGPT can help with proofreading, editing, or generating ideas for written materials. You can offer your services as a writer/editor, and use ChatGPT to improve your efficiency and deliver higher-quality work. This could enable you to take on more clients or charge higher rates for your services.

Virtual Assistant: Position yourself as a virtual assistant and utilize ChatGPT to handle customer inquiries, provide information, or support various tasks. This could involve managing appointments, responding to emails, or engaging with customers on behalf of a business. You can charge clients for your time and expertise while leveraging ChatGPT to streamline your operations.

Language Tutoring: If you are proficient in multiple languages, you can offer language tutoring services. You can utilize ChatGPT to help create lesson plans, practice conversations, or provide explanations. By combining your language skills with ChatGPT’s assistance, you can enhance your tutoring capabilities and potentially attract more students.

Chatbot Development: Use ChatGPT to develop chatbots for businesses or websites. Many companies are interested in deploying AI-powered chatbots to handle customer inquiries or provide support. You can leverage ChatGPT’s language capabilities to build and customize chatbot solutions, and earn money by offering your services to businesses.
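The chatbot-development idea above can be sketched with the official `openai` Python client. The model name, system prompt, and helper functions here are illustrative assumptions, not details from the article:

```python
def build_messages(system_prompt, user_question):
    # The Chat Completions API expects a list of role/content dicts.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

def answer(question,
           system_prompt="You are a helpful support agent for Acme Co."):
    # Lazy import so build_messages() works even without the SDK installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=build_messages(system_prompt, question),
    )
    return resp.choices[0].message.content

# Live usage (needs an API key and network access):
# print(answer("How do I reset my password?"))
```

A business-specific chatbot is usually just this wrapper plus a company-specific system prompt and some glue code to a website or messaging channel.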

Remember, when offering services that involve AI-generated content, it’s important to disclose to your clients or users that they are interacting with an AI and manage their expectations accordingly. Additionally, consider the ethical implications and guidelines surrounding the use of AI in various contexts.

The post How to earn money using AI based ChatGPT appeared first on Cybersecurity Insiders.

ChatGPT, the conversational bot developed by OpenAI and heavily backed by Microsoft, has hit the news headlines for the wrong reasons. A senior government official from the UAE has alleged that the chat-based platform is being used by criminals to launch phishing and ransomware attacks.

“It’s become a trend to use technology for cyberwarfare, and we have investigated it along with our partners and discovered that our adversaries have already started using it,” said Mohammad Al Kuwaiti, the UAE’s Head of Cybersecurity.

Mr. Al Kuwaiti added that hackers are using ChatGPT to reprogram and augment ransomware scripts. They are also using the platform to write phishing emails, launching them with a 63% success rate.

Delivering a keynote at the 6th CSIS Cybersecurity Innovation Series Conference in Dubai, Mr. Al Kuwaiti said he believed the new tech might do more harm than good in the near future, as criminals aim to use artificial intelligence to launch attacks on critical infrastructure such as electricity, energy, transportation, aviation, education, and healthcare.

NOTE 1– Here the technology cannot be put at fault, as everything depends on the mind that is using it. If it is used for a good cause, it can yield positive results, and vice versa.

NOTE 2– Critics of artificial intelligence development, such as Tesla chief Elon Musk, have expressed their concern about the use of ChatGPT and have called for a pause on all AI development until rules governing its usage and development are drafted for the benefit of mankind.

The post ChatGPT used to launch phishing and ransomware attacks appeared first on Cybersecurity Insiders.

ChatGPT hallucinations cause turbulence in court, a riot in Wales may have been ignited on social media, and do you think .MOV is a good top-level domain for "a website that moves you"? All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Mark Stockley. Plus don't miss our featured interview with David Ahn of Centripetal.

Google has released its new AI chat service, dubbed Bard, in over 180 countries, with 15 more to follow by the end of next month. Bard is Google’s sure-shot competitor to the Microsoft-backed, OpenAI-developed ChatGPT service, which can answer anything and everything.

But there’s more to the release of the Alphabet Inc. company, and here’s some knowledge to share about it:

1.) ChatGPT offers answers from its own data repository, but Google Bard can offer answers sieved from the internet, giving users a great chance to find relevant information.

2.) In the coming months, Bard will be integrated with Gmail, allowing users to write emails, search emails, and use it for multiple purposes. On the other hand, ChatGPT doesn’t offer integration with Gmail or with Microsoft’s Outlook.

3.) In its beta version, Bard allows users to import files from Gmail and Google Docs, making it easy for them to utilize the generated content.

4.) ChatGPT doesn’t allow voice prompts, but Google’s new chat-based service does, making it easy for users to inquire using their voice rather than the typical typing format.

5.) OpenAI’s chatbot, ‘ChatGPT,’ was trained to answer questions using the repository it built until September 2021. However, Bard can answer questions with the latest content generated until April this year, providing professionals and students with the latest information from the AI-based conversational chatbot.

6.) Bard can explain coding in about 20 programming languages as soon as the user shares a link. However, as ChatGPT is not connected to the internet, it cannot explain programs from a link.

NOTE: Bard is available in Korean, Japanese, and Hindi, and will soon support 40 more languages. However, the reliability factor is currently missing in both machine learning models.

The post Things ChatGPT cannot but Google Bard can do appeared first on Cybersecurity Insiders.

ChatGPT, the AI-based chatbot developed by OpenAI, can answer anything and everything. However, can you imagine that chatbot assistance is also being used to create malware and its various mutations? Threat intelligence company WithSecure has discovered this activity and raised a red alert immediately.

Tim West, head of threat intelligence at WithSecure, believes that the creation of malware through artificial intelligence will increase challenges for defenders.

As the software is readily available without any limitations, malicious actors can deliberately use it, just as they are currently using remote access tools to break into corporate networks, said Mr. West.

Stephen Robinson, senior threat intelligence analyst at the company, stated that cybercriminals will evolve their malware-distribution methods by mimicking the activities of legitimate businesses and spreading malware through masqueraded emails and messages.

One such instance was observed online a couple of weeks ago, when cybercriminals were caught spreading malicious software tools through the Facebook, WhatsApp, and Instagram platforms. Security teams at Meta, which owns all three, identified at least 10 malware families posing as ChatGPT tools and using the messaging platforms to spread malware.

In one such case, the threat actors created browser extensions that claimed to offer the services of the AI platform. However, they were actually tricking people into downloading malware like DuckTail, which has the capability to steal information from victims’ Facebook login sessions, including 2FA, location data, and other account details.

Initially, Vietnamese threat actors were suspected to be behind the incident. However, Cisco Talos, which had been tracking the hackers since September 2022, confirmed that the attack was the work of either Chinese or Russian hackers who were obscuring their activity to appear as if it originated from Vietnam.

NOTE: As I always say, it is not the software that is at fault. Instead, it is the human mind that needs to be held responsible, as people can use AI-based technology for both creative and destructive purposes.

The post ChatGPT now generates Malware mutations appeared first on Cybersecurity Insiders.

Interesting essay on the poisoning of LLMs—ChatGPT in particular:

Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.

They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.

Once they do update it, we only have their word—pinky-swear promises—that they’ve done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.

In case you don’t have enough to worry about, someone has built a credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation, something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.

Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.

Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as G-code for a pen plotter. But instead, Devadath created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.
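The “small random variations” step can be sketched as a simple jitter over stroke coordinates. This is a toy illustration only; the real project works from recorded handwriting samples and G-code, and the function and parameter names here are invented:

```python
import random

def jitter_stroke(points, amount=0.3, seed=None):
    """Perturb each (x, y) point of a glyph stroke by a small random
    offset, mimicking the natural variation in human handwriting."""
    rnd = random.Random(seed)
    return [(x + rnd.uniform(-amount, amount),
             y + rnd.uniform(-amount, amount)) for x, y in points]

# A hypothetical stroke for the letter "l": a straight vertical line.
stroke = [(0.0, float(y)) for y in range(10)]
wobbly = jitter_stroke(stroke, amount=0.3, seed=42)
```

Applying a fresh jitter to every occurrence of a glyph is what keeps two instances of the same letter from ever being pixel-identical, which is exactly the tell that gives machine output away.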

Watch the video.

My guess is that this is another detection/detection avoidance arms race.