Social media platforms can quickly become perilous if users neglect fundamental cyber hygiene practices. This concern is particularly relevant for Facebook users, as an alarming malvertising campaign is currently underway that disseminates SYS01Stealer malware.

Presently, Facebook is the epicenter of two significant malicious campaigns. The first involves the distribution of malware aimed at infiltrating Facebook accounts and capturing user credentials. The second focuses on account takeovers, in which hackers gain unauthorized access to user accounts and promote fictitious products and services, often under the guise of raising funds for a family member’s medical expenses or education bills. These fraudulent activities exploit the trust inherent in social media, transforming Facebook from a space for connection into a breeding ground for scams.

According to Miley Waluch, a freelance cybersecurity expert affiliated with a law enforcement agency in Israel, hackers employ various tactics to lure unsuspecting users. They post malicious links to pages advertising car sales, game sales, adult content, smartphone deals, and furniture sales—all with enticing offers of substantial discounts. This bait tempts users to click, ultimately leading to the theft of sensitive information, including Facebook account credentials and credit card details, which can result in unauthorized withdrawals from bank accounts.

In the past eleven months, Meta, the parent company of Facebook, has received over 68 complaints regarding hacked accounts being exploited for fraudulent purposes. Meanwhile, Google reports that Facebook users have conducted more than 120,000 searches for help with hacked accounts within the past year.

In light of these threats, users are strongly encouraged to enhance their account security with multi-factor authentication, whether two-factor authentication (2FA) codes or biometric verification such as fingerprint or facial recognition. These measures not only help curtail the spread of fraud but also protect account holders from becoming embroiled in controversies or financial losses.
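For readers wondering what a 2FA code actually is under the hood: most authenticator apps implement the TOTP standard (RFC 6238), which derives a short-lived six-digit code from a shared secret and the current time. The sketch below is a minimal illustration of that standard, not production code and not anything specific to Facebook’s implementation:

```python
import base64
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(base32_secret: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238), as shown by authenticator apps."""
    key = base64.b32decode(base32_secret, casefold=True)
    return hotp(key, int(time.time()) // period, digits)
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a phished password alone is not enough to log in — which is exactly why this advice blunts the credential-stealing campaigns described above.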

With the U.S. 2024 elections just a week away, there is heightened concern that hackers may exploit the names of political figures like Kamala Harris and Donald Trump to solicit donations under the pretense of charitable causes or campaign funding. Users, especially those engaging on Facebook Marketplace, are urged to remain vigilant against these schemes and to avoid clicking on links from unknown sources, especially those masquerading as friend requests or offering products at unrealistically low prices.

The post Facebook alerts users about the ongoing Malvertising Campaign appeared first on Cybersecurity Insiders.

In episode 18 of "The AI Fix" our hosts discover that OpenAI's Advanced Voice mode is too emotional for Europeans, a listener writes a Viking saga about LinkedIn, ChatGPT is a terrible doctor, and the voice of Meta AI takes to Meta's platforms to complain about Meta AI reading things people post on Meta's platforms. Mark discovers what Darth Vader really said on Cloud City, Graham rummages through ChatGPT's false memories, and our hosts find out why AIs need an inner critic. All this and much more is discussed in the latest edition of "The AI Fix" podcast by Graham Cluley and Mark Stockley.

A California man accused of failing to pay taxes on tens of millions of dollars allegedly earned from cybercrime also paid local police officers hundreds of thousands of dollars to help him extort, intimidate and silence rivals and former business partners, a new indictment charges. KrebsOnSecurity has learned that many of the man’s alleged targets were members of UGNazi, a hacker group behind multiple high-profile breaches and cyberattacks back in 2012.

A photo released by the government allegedly showing Iza posing with several LASD officers on his payroll.

An indictment (PDF) unsealed last week said the Federal Bureau of Investigation (FBI) has been investigating Los Angeles resident Adam Iza. Also known as “Assad Faiq” and “The Godfather,” Iza is the 30-something founder of a cryptocurrency investment platform called Zort that advertised the ability to make smart trades based on artificial intelligence technology.

But the feds say investors in Zort soon lost their shorts, after Iza and his girlfriend began spending those investments on Lamborghinis, expensive jewelry, vacations, a $28 million home in Bel Air, even cosmetic surgery to extend the length of his legs.

The indictment states the FBI started looking at Iza after receiving multiple reports that he had on his payroll several active deputies with the Los Angeles Sheriff’s Department (LASD). Iza’s attorney did not immediately respond to requests for comment.

The complaint cites a letter from an attorney for a victim referenced only as “E.Z.,” who was seeking help related to an extortion and robbery allegedly committed by Iza. The government says that in March 2022, three men showed up at E.Z.’s home and tried to steal his laptop in an effort to gain access to E.Z.’s cryptocurrency holdings online. A police report referenced in the complaint says the three intruders were scared off when E.Z. fired several handgun rounds in their direction.

The FBI later obtained a copy of a search warrant executed by LASD deputies in January 2022 for GPS location information on a phone belonging to E.Z., which shows an LASD deputy unlawfully added E.Z.’s mobile number to a list of those associated with an unrelated firearms investigation.

“Damn my guy actually filed the warrant,” Iza allegedly texted someone after the location warrant was entered. “That’s some serious shit to do for someone….risking a 24 years career. I pay him 280k a month for complete resources. They’re active-duty.”

The FBI alleges LASD officers had on several previous occasions tried to kidnap and extort E.Z. at Iza’s behest. The complaint references a November 2021 incident wherein Iza and E.Z. were in a car together when Iza asked to stop and get snacks at a convenience store. While they were still standing next to the car, a van with several armed LASD deputies showed up and tried to force E.Z. to hand over his phone. E.Z. escaped unharmed, and alerted 911.

E.Z. appears to be short for Enzo Zelocchi, a self-described “actor” who was featured in an ABC News story about a home invasion in Los Angeles around that same time, in which Zelocchi is quoted as saying at least two men tried to rob him at gunpoint (we’ll revisit Zelocchi’s acting credits in a moment).

One of many self portraits published on the Instagram account of Enzo Zelocchi.

The indictment makes frequent references to a co-conspirator of Iza (“CC-1”) — his girlfriend at the time — who allegedly helped Iza run his businesses and spend the millions plunked down by Zort investors. We know what E.Z. stands for because Iza’s girlfriend then was a woman named Iris Au, and in November 2022 she sued Zelocchi for allegedly stealing Iza’s laptop.

Iza’s indictment says he also harassed a man identified only as T.W., and refers to T.W. as one of two Americans currently incarcerated in the Philippines for murder. In December 2018, a then 21-year-old Troy Woody Jr. was arrested in Manila after he was spotted dumping the body of his dead girlfriend Tomi Masters into a local river.

Woody is accused of murdering Masters with the help of his best friend and roommate at the time: Mir Islam, a.k.a. “JoshTheGod,” referred to in the Iza complaint as “M.I.” Islam and Woody were both core members of UGNazi, a hacker collective that sprang up in 2012 and claimed credit for hacking and attacking a number of high-profile websites.

In June 2016, Islam was sentenced to a year in prison for an impressive array of crimes, including stalking people online and posting their personal data on the Internet. Islam also pleaded guilty to reporting dozens of phony bomb threats and fake hostage situations at the homes of celebrities and public officials (Islam participated in a swatting attack against this author in 2013).

Troy Woody Jr. (left) and Mir Islam, are currently in prison in the Philippines for murder.

In December 2022, Troy Woody Jr. sued Iza, Zelocchi and Zort, alleging (PDF) Iza and Zelocchi were involved in a 2018 home invasion at his residence, wherein Woody claimed his assailants stole laptops and phones containing more than $200 million in cryptocurrencies.

Woody’s complaint states that Masters also was present during his 2018 home invasion, as was another core UGNazi member: Eric “CosmoTheGod” Taylor. CosmoTheGod rocketed to Internet infamy in 2013 when he and a number of other hackers set up the Web site exposed[dot]su, which published the addresses, Social Security numbers and other personal information of public figures, including former First Lady Michelle Obama, the then-director of the FBI and the U.S. attorney general. The group also swatted many of the people they doxed.

Exposed was built with the help of identity information obtained and/or stolen from ssndob dot ru.

In 2017, Taylor was sentenced to three years probation for participating in multiple swatting attacks, including the one against my home in 2013.

Iza’s indictment says the FBI interviewed Woody in Manila, where he is currently incarcerated, and learned that Iza has been harassing him about passwords that would unlock access to cryptocurrencies. The FBI’s complaint leaves open the question of how Woody and Islam got the phones in the first place, but the implication is that Iza may have instigated the harassment by having mobile phones smuggled to the prisoners.

The government suggests its case against Iza was made possible in part thanks to Iza’s propensity for ripping off people who worked for him. The indictment cites information provided by a private investigator identified only as “K.C.,” who said Iza hired him to surveil Zelocchi but ultimately refused to pay him for much of the work.

K.C. stands for Kenneth Childs, who in 2022 sued Iris Au and Zort (PDF) for theft by deception and commercial disparagement, after it became clear his private eye services were being used as part of a scheme by the Zort founders to intimidate and extort others. Childs’ complaint says Iza ultimately clawed back tens of thousands of dollars in payments he’d previously made as part of their contract.

The government also included evidence provided by an associate of Iza’s — named only as “R.C.” — who was hired to throw a party at Iza’s home. According to the feds, Iza paid the associate $50,000 to craft the event to his liking, but on the day of the party Iza allegedly told R.C. he was unhappy with the event and demanded half of his money back.

When R.C. balked, Iza allegedly surrounded the man with armed LASD officers, who then extracted the payment by seizing his phone. The indictment claims Iza kept R.C.’s phone and spent the remainder of his bank balance.

A photo Iza allegedly sent to Tassilo Heinrich immediately after Heinrich’s arrest on unsubstantiated drug charges.

The FBI said that after the incident at the party, Iza had his bribed sheriff’s deputies pull R.C. over and arrest him on phony drug charges. The complaint includes a photo of R.C. being handcuffed by the police, which the feds say Iza sent to R.C. in order to intimidate him even further. The drug charges were later dismissed for lack of evidence.

The government alleges Iza and Au paid the LASD officers using Zelle transfers from accounts tied to two different entities incorporated by one or both of them: Dream Agency and Rise Agency. The complaint further alleges that these two entities were the beneficiaries of a business that sold hacked and phished Facebook advertising accounts, and bribed Facebook employees to unblock ads that violated its terms of service.

The complaint says Iza ran this business with another individual identified only as “T.H.,” and that at some point T.H. had some personal problems and checked himself into rehab. T.H. told the FBI that Iza responded by stealing his laptop and turning him in to the government.

KrebsOnSecurity has learned that T.H. in this case is Tassilo Heinrich, a man indicted in 2022 for hacking into the e-commerce platform Shopify, and leaking the user database for Ledger, a company that makes hardware wallets for storing cryptocurrencies.

Heinrich pleaded guilty in 2022 and was sentenced to time served, three years of supervised release, and ordered to pay restitution to Shopify. Upon his release from custody, Heinrich told the FBI that Iza was still using his account at the public screenshot service Gyazo to document communications regarding his alleged bribing of LASD officers.

Prosecutors say Iza and Au portrayed themselves as glamorous and wealthy individuals who were successful social media influencers, but that most of that was a carefully crafted facade designed to attract investment from cryptocurrency enthusiasts. Meanwhile, the U.K. tabloids reported this summer that Au was dating Davide Sanclimenti, the 2022 co-winner on the dating reality show Love Island.

Au was featured on the July 2024 cover of “Womenpreneur Middle East.”

Recall that we promised to revisit Mr. Zelocchi’s claimed acting credits. Despite being briefly listed on the Internet Movie Database (imdb.com) as the most awarded science fiction actor of all time, it’s not clear whether Mr. Zelocchi has starred in any real movies.

Earlier this year, an Internet sleuth on YouTube showed that even though Zelocchi’s IMDB profile has him earning more awards than most other actors on the platform (here he is holding a YouTube top viewership award), Zelocchi is probably better known as the director of the movie once rated the absolute worst sci-fi flick on IMDB: a 2015 work called “Angel’s Apocalypse.” Most of the videos on Zelocchi’s Instagram page are short clips, some of which look more like a commercial for men’s cologne than scenes from a real movie.

A Reddit post from a year ago calling attention to Zelocchi’s sci-fi film Angel’s Apocalypse somehow earning more audience votes than any other movie in the same genre.

In many ways, the crimes described in this indictment and the various related civil lawsuits prefigured a disturbing new trend that has bubbled up within English-speaking cybercrime communities in the past few years: the emergence of “violence-as-a-service” offerings that allow cybercriminals to anonymously extort and intimidate their rivals.

Found on certain Telegram channels are solicitations for IRL or “In Real Life” jobs, wherein people hire themselves out as willing to commit a variety of physical attacks in their local geographic area, such as slashing tires, firebombing a home, or tossing a brick through someone’s window.

Many of the cybercriminals in this community have stolen tens of millions of dollars worth of cryptocurrency, and can easily afford to bribe police officers. KrebsOnSecurity would expect to see more of this in the future as young, crypto-rich cybercriminals seek to corrupt people in authority to their advantage.

Facebook Faces Data Breach Concerns

Facebook, the social media giant founded by Mark Zuckerberg, has once again found itself under scrutiny due to reports of a significant data breach. A recent disclosure by the India-based non-profit organization known as the ‘CyberPeace Team’ revealed that data belonging to over 100,000 users has surfaced on an information-sharing forum.

The leaked data comprises sensitive user information including names, profiles, email addresses, contact details, and locations. Such a breach raises serious concerns regarding potential phishing scams and other social engineering attacks that exploit this information.

The exact origin of the data breach remains unclear, as does the geographic distribution of the affected users. However, breaches of this kind often prompt government investigations and can tarnish a company’s reputation. Notably, in 2021, the Ireland Data Protection Commission imposed a significant penalty on Facebook’s parent company, Meta, for a massive data leak affecting over 533 million users.

Akira Ransomware Emerges, Prompts FBI Alert

The emergence of a new ransomware variant known as Akira Ransomware has sent shockwaves across the cybersecurity landscape, particularly in Singapore, where businesses have become targets. The Singaporean government has responded by issuing advisories urging local businesses not to entertain ransom demands from hackers.

Authorities stress that paying a ransom does not guarantee the provision of decryption keys or prevent the public disclosure of stolen data. Moreover, hackers may exploit the situation by repeatedly extorting organizations, as exemplified by the recent case involving Change Healthcare, which paid a staggering $22 million to the ALPHV or BlackCat ransomware group, only to face renewed threats from another group known as RansomHub.

In response to the escalating threat, the FBI has issued a public warning regarding the Akira ransomware gang’s modus operandi. Instead of directly contacting victims after encrypting their databases, the gang leaves a contact email address in a pop-up note displayed post-encryption, adding another layer of complexity to the ransomware landscape.

The post News about Facebook Data Breach and FBI alert on Akira Ransomware appeared first on Cybersecurity Insiders.

Facebook users need to be on high alert: a new phishing scam has emerged that disguises itself as a webpage from a reputable company but leads to a deceptive advertisement designed to steal sensitive information. The scam, operating under the guise of Facebook, is currently proliferating on Google and poses a significant threat, attempting to pilfer valuable data such as bank passwords and email addresses.

Cybersecurity expert Justin Poli was among the first to uncover this fraudulent scheme masquerading as Facebook, which facilitates the unauthorized extraction of personal information from unsuspecting online users under the pretext of a social media webpage.

In theory, companies vying for top rankings on Google must adhere to strict guidelines prohibiting practices that harm online users. In practice, however, certain actors are exploiting loopholes: they place advertisements in the names of reputable companies at the top of search engine results, only to deceive users and harvest their credentials.

In many instances, the administrators behind these ad campaigns are afforded special privileges, such as the ability to alter URLs even after the ads have been published—a capability exploited by cybercriminals to perpetrate their schemes.

In response to these threats, Google has issued a warning and asserted that its monitoring teams are diligently working to root out such malicious advertising campaigns. Moreover, recognizing the escalating sophistication of hackers, the tech behemoth is harnessing the power of artificial intelligence to fortify its efforts in providing a secure online environment for users seeking services.

Concurrently, online users are strongly advised against clicking on links from dubious sources, including emails, SMS messages, and the sponsored results that appear at the top of search engine pages.
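One practical check this advice points toward is comparing the domain a link displays with the domain it actually resolves to. The sketch below is our own rough heuristic, not anything from the article or from Google’s tooling; a real implementation would consult the Public Suffix List instead of naively taking the last two hostname labels (which misjudges domains like example.co.uk):

```python
from urllib.parse import urlparse


def registrable_domain(url: str) -> str:
    """Last two labels of the hostname -- a crude stand-in for a real
    Public Suffix List lookup."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host


def looks_like_impersonation(display_text: str, href: str) -> bool:
    """True when a link's visible text names a different registrable
    domain than the URL it actually points to."""
    shown = display_text.strip().lower()
    for scheme in ("https://", "http://"):
        shown = shown.removeprefix(scheme)
    return registrable_domain("http://" + shown) != registrable_domain(href)
```

For example, a link that reads “facebook.com” but points to facebook.com.evil.example would be flagged, while one pointing to www.facebook.com would pass — the same mismatch a cautious user can spot by hovering over a link before clicking.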

It’s noteworthy that a study conducted by Deloitte has revealed that individuals belonging to Generation Z (aged between 14 and 26) are more susceptible to falling victim to such scams compared to older generations, such as baby boomers (aged between 58 and 76). This underscores the importance of raising awareness and implementing robust cybersecurity measures across all demographics.

The post Google Facebook ads are deceptive and information stealing appeared first on Cybersecurity Insiders.

Take That's Gary Barlow chats up a pizza-slinging granny from Essex via Facebook, or does he? And a scam takes a sinister turn - for both the person being scammed and an innocent participant - in Ohio. All this and more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault.

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter &amp; Gamble recently slashed their digital ad spending by hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

This essay was written with Nathan Sanders, and was originally published in MIT Technology Review.

Shortly after millions of Facebook and Instagram users encountered difficulties accessing their accounts, speculation quickly arose that a state-funded cyberattack might be to blame.

Mark Zuckerberg, fresh from a vacation in India, promptly took to Twitter, now X, to assure users that his security teams were diligently investigating the issue. He dismissed the notion of a cyberattack, citing Meta’s robust security measures designed to prevent such incidents.

The outage, which lasted from 10 am ET to 12 pm ET, is estimated to have cost the tech giant a staggering $100 million. With more than 500,000 users potentially affected globally, the outage was a significant setback for the internet behemoth, which relies heavily on ad revenue and stock performance to maintain its market dominance.

Dan Ives, managing director at Wedbush Securities in New York, corroborated the financial impact, noting that the losses could have been even more severe had Meta not promptly implemented business continuity measures.

Curiously, the outage occurred just a day after Meta CEO Mark Zuckerberg and his wife, Priscilla Chan, made headlines for admiring a $10 million watch owned by Anant Ambani, the younger son of Reliance Industries chairman Mukesh Ambani. An extravagant pre-wedding celebration hosted by Ambani at Vantara, a national initiative dedicated to animal welfare, had drawn wide media attention.

It’s worth noting that the outage was attributed to a technical glitch in the servers’ application programming interface (API), as revealed on Reddit by an engineer associated with the company.

The post Facebook and Instagram down by Cyber Attack appeared first on Cybersecurity Insiders.

Heavens above! Scammers are exploiting online funerals, and LockBit - the "Walmart of ransomware" - is dismantled in style by cyber cops. All this and more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault.

Holy mackerel! AI is jumping on the religion bandwagon, ransomware gangs target hospitals, and what's happened to your old mobile phone number? All this and much, much more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by "Ransomware Sommelier" Allan Liska.