Category: Privacy
License plate scanners aren’t new. Neither is using them for bulk surveillance. What’s new is that AI is being used on the data, identifying “suspicious” vehicle behavior:
Typically, Automatic License Plate Recognition (ALPR) technology is used to search for plates linked to specific crimes. But in this case it was used to examine the driving patterns of anyone passing one of Westchester County’s 480 cameras over a two-year period. Zayas’ lawyer Ben Gold contested the AI-gathered evidence against his client, decrying it as “dragnet surveillance.”
And he had the data to back it up. A FOIA he filed with the Westchester police revealed that the ALPR system was scanning over 16 million license plates a week, across 480 ALPR cameras. Of those systems, 434 were stationary, attached to poles and signs, while the remaining 46 were mobile, attached to police vehicles. The AI was not just looking at license plates either. It had also been taking notes on vehicles’ make, model and color—useful when a plate number for a suspect vehicle isn’t visible or is unknown.
Phone number spoofing involves manipulating caller ID displays to mimic legitimate phone numbers, giving scammers a deceptive veil of authenticity.
Related: The rise of ‘SMS toll fraud’
The Bank of America scam serves as a prime example of how criminals exploit this technique. These scammers impersonate Bank of America representatives, spoofing the bank’s genuine phone number (+18004321000) to gain trust and deceive their targets.
Victims of the Bank of America scam have shared their experiences, shedding light on the deceptive tactics employed by these fraudsters. One common approach involves a caller with an Indian accent posing as a Bank of America representative. They may claim that a new credit card or checking account has been opened in the victim’s name, providing specific details such as addresses and alleged deposits to sound convincing.
Scam tactic exposed
Nicolas Girard shared his experience with the Bank of America scam. He received a call claiming a new checking account was opened in his name, complete with his correct address and a $5,000 deposit. To verify their authenticity, Nicolas asked for proof, but the scammers insisted he Google the Bank of America number.
Suspicious, he trusted his instincts and called the bank directly. Genuine representatives confirmed it was a scam, with no new accounts linked to his social security number. Research unveiled the widespread practice of spoofing the Bank of America number.
Nicolas took immediate action, freezing his credit accounts to protect himself. His story serves as a reminder to stay vigilant against phone scams, ensuring our financial well-being and personal security.
Scope of the threat
Search statistics underscore the scale of the problem. The spoofed Bank of America number, +18004321000, drew almost 600 lookups per month on tellows in 2023 — an estimated 6,000-plus searches over the year. That figure alone highlights how widespread this scam has become, and it serves as a stark reminder of the importance of raising awareness about phone number spoofing and its risks.
It is crucial to be aware of the red flags associated with phone scams like the Bank of America scam. Victims have reported several warning signs, such as unsolicited calls, requests for sensitive information, and high-pressure tactics. Recognizing these indicators can help individuals protect themselves from falling victim to such scams.
To combat phone harassment and protect against scams like the Bank of America scam, the tellows caller ID app offers valuable features. This app provides reverse phone number lookup, allowing users to identify potential scammers or suspicious callers. With a vast database of reported numbers and user feedback, the app provides essential information to help individuals make informed decisions about answering or blocking calls.
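The reverse-lookup idea described above is easy to picture in miniature: match an incoming number against a database of user-reported numbers and surface its reputation. The numbers, report counts and the `lookup()` helper below are hypothetical illustrations of the concept, not the tellows API.

```python
# Minimal sketch of reverse phone number lookup against a local
# database of user-reported numbers. All data here is illustrative.

REPORTED_NUMBERS = {
    "+18004321000": {"reports": 142, "category": "spoofed bank number"},
    "+18005551234": {"reports": 3, "category": "telemarketing"},
}

def lookup(number: str) -> str:
    """Return a human-readable reputation summary for a caller."""
    entry = REPORTED_NUMBERS.get(number)
    if entry is None:
        return f"{number}: no reports on file"
    return f"{number}: {entry['reports']} reports ({entry['category']})"

print(lookup("+18004321000"))
# -> +18004321000: 142 reports (spoofed bank number)
```

A real service layers user feedback, recency weighting and spam scores on top of this basic keyed lookup, but the core decision — answer or block — rests on the same pattern.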
Practical protection
To safeguard yourself from falling victim to phone number spoofing scams, consider the following preventive measures:
• Verify Caller Authenticity: Independently contact your bank using official contact information to verify the legitimacy of any calls claiming to be from financial institutions.
• Be Wary of Sharing Personal Information: Never share sensitive information, such as account numbers or Social Security numbers, over the phone unless you initiated the call and are confident in the caller’s identity.
• Install the tellows Caller ID App: Use the tellows caller ID app to identify potential scam calls and protect yourself from phone harassment. The app’s reverse phone number lookup feature provides insights into caller reputation and user-reported experiences.
With its extensive global database and user-generated ratings, tellows helps users identify and block unwanted and potentially fraudulent calls, and make informed decisions about which calls to answer — saving time and frustration.
Phone number spoofing poses a growing threat. Stay vigilant and informed to protect against such fraud.
About the essayist: Richard Grant is a country content manager at tellows. He is responsible for overseeing the content strategy, user-generated ratings and data management for a specific country. Richard’s expertise in call identification and spam detection contributes to tellows’ mission of empowering individuals to avoid annoying and potentially fraudulent calls.
This is why we need regulation:
Zoom updated its Terms of Service in March, spelling out that the company reserves the right to train AI on user data with no mention of a way to opt out. On Monday, the company said in a blog post that there’s no need to worry about that. Zoom execs swear the company won’t actually train its AI on your video calls without permission, even though the Terms of Service still say it can.
Of course, these are Terms of Service. They can change at any time. Zoom can renege on its promise at any time. There are no rules, only the whims of the company as it tries to maximize its profits.
It’s a stupid way to run a technological revolution. We should not have to rely on the benevolence of for-profit corporations to protect our rights. It’s not their job, and it shouldn’t be.
LAS VEGAS – Just when we appeared to be on the verge of materially shrinking the attack surface, along comes an unpredictable, potentially explosive wild card: generative AI.
Related: Can ‘CNAPP’ do it all?
Unsurprisingly, generative AI was in the spotlight at Black Hat USA 2023, which returned to its full pre-Covid grandeur here last week.
Maria Markstedter, founder of Azeria Labs, set the tone in her opening keynote address. Artificial intelligence has been in commercial use for many decades; Markstedter recounted why this potent iteration of AI is causing so much fuss, just now.
Generative AI makes use of a large language model (LLM) – an advanced algorithm that applies deep learning techniques to massive data sets. The popular service, ChatGPT, is based on OpenAI’s LLMs, trained on data gathered from across the Internet through 2021, plus anything a user cares to feed into it. Generative AI ingests it all, then applies algorithms to understand, generate and predict new content – in text-based summaries that any literate human can grasp.
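The “predict the next token” principle at the heart of an LLM can be shrunk to a toy you can run in a few lines. The sketch below is a character-level bigram model — real LLMs use deep neural networks over enormous corpora, so this only illustrates the idea that generation is repeated sampling from learned statistics.

```python
# Toy illustration of next-token prediction: count which character
# follows which in a tiny corpus, then sample to generate new text.
from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat. the cat ran."

# "Training": tally successor frequencies for each character.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(seed: str, length: int = 20) -> str:
    """Extend `seed` by sampling successors in proportion to counts."""
    random.seed(0)  # fixed seed so the demo is repeatable
    out = seed
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:
            break
        chars, weights = zip(*successors.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("th"))
```

Swap the bigram table for a transformer with billions of parameters and the corpus for a crawl of the Internet, and the loop above is, conceptually, what ChatGPT does.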
I spoke to technologists, hackers, marketers, company founders, researchers, academics, publicists and fellow journalists about the promise and pitfalls of commoditizing AI in this fashion. I came away with a much better understanding of the disruption/transformation that is gaining momentum, with respect to privacy and cybersecurity.
Shadow IT on steroids
For the moment, generative AI has in fact dramatically accelerated attack surface expansion. I spoke with Casey Ellis, founder of Bugcrowd, which supplies crowd-sourced vulnerability testing, all about this. We discussed how elite hacking collectives already are finding ways to use it as a force multiplier, streamlining repetitive tasks and enabling them to scale up their intricate, multi-staged attacks.
What’s more, generative AI has exacerbated the longstanding problem of well-intentioned employees unwittingly creating dangerous new exposures, especially in hybrid and multi-cloud networks. I spoke with Uy Huynh, vice president of solutions engineering at Island.io, about how generative AI has quickly become like BYOD and Shadow IT on steroids. Island supplies an advanced web browser security solution.
“The days of localized data loss are over,” says Huynh. “With ChatGPT, when you post sensitive content as part of a query, it subsequently makes its way to OpenAI’s underlying LLM. Every piece of information becomes a part of the model’s vast knowledge base. This unintentional leakage can have dire consequences, as sensitive information can thereafter be accessed through the right prompts.”
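One common mitigation for the leakage Huynh describes is a client-side guardrail that redacts obvious sensitive patterns before a prompt ever leaves the organization. The patterns and `redact()` helper below are a minimal sketch, not a complete data-loss-prevention solution.

```python
# Hedged sketch: scrub well-known sensitive patterns from a prompt
# before it is sent to any external LLM service.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Customer 123-45-6789 emailed jane@example.com"))
# -> Customer [REDACTED SSN] emailed [REDACTED EMAIL]
```

Regex filters catch only the low-hanging fruit — source code, contract language and trade secrets need policy controls and user training as well — but a filter like this stops the most careless paste-into-the-prompt mistakes.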
Of course, the good guys aren’t asleep at the wheel. Another theme that stood out at Black Hat: security innovators are, at this moment, creating and testing new ways to leverage generative AI – as a force multiplier – for their respective security specialties.
Threat intelligence vendor Cybersixgill, for instance, launched Cybersixgill IQ at Black Hat. This new service feeds vast data sets of threat intel into a customized LLM tuned to generate answers to nuanced security questions.
The idea is to shrink the time analysts spend sifting through data, says Brad Liggett, director of global sales engineering. Cybersixgill’s researchers, for instance, are finding they can quickly gain insights they might have missed or taken much longer to uncover.
This all really boils down to intuitive questioning of generative AI by clever human experts. Bugcrowd’s stable of independent white hat hackers, for instance, is probing for the edges of the envelope, striving to determine where usefulness ends and inaccuracy kicks in, Ellis told me.
Defense-in-depth redux
I also spoke just ahead of the conference with Horizon3.ai, Syxsense and Trustle – and we touched on how they are factoring in generative AI; for a deeper dive, please give a listen to my podcast discussions with each. At the conference, I had deep conversations with experts from Bugcrowd, Island.io, Traceable.ai, Data Theorem, Sonar and Flexxon; stay tuned for upcoming Last Watchdog podcasts with each.
Generative AI is sure to rivet everyone’s attention for some time to come. When it comes to cybersecurity, Markstedter, the keynote presenter, astutely observed how generative AI is on track to match the original iPhone’s adoption trajectory: massive popularity followed by an extended period of companies scrambling to gain security equilibrium.
“Do you remember the first version of the iPhone? It was so insecure — everything was running as root. It was riddled with critical bugs. It lacked exploit mitigations or sandboxing,” she said. “That didn’t stop us from pushing out the functionality and for businesses to become part of that ecosystem.”
Cybersecurity is undergoing a tectonic shift, folks. To get us where we need to be, traditional, perimeter-centric IT defenses need to be reconstituted and security services delivery models need to be reshaped. A new tier of overlapping, interoperable, highly automated security platforms is taking shape. Defense-in-depth remains a mantra, but one that is morphing into something altogether new.
Automation and interoperability must take over and several new security layers must coalesce and interweave to address attack surface expansion. Generative AI has come along as a two-edged sword, accelerating attack surface expansion, but also stirring cybersecurity innovation. In short, the arms race has taken on a critical new dimension.
Cutting against the grain
A few off-the-cuff discussions I had on the exhibits floor at Black Hat resonated. One was with Saryu Nayyar, CEO of Gurucul, supplier of a unified security and risk analysis solution. Gurucul, too, launched a “generative AI assistant” at Black Hat and has been in the vanguard of another major trend: competing to shape the multi-faceted security platforms we’ll need to carry us forward.
“We’ve always had a vision, right from the beginning, of supplying a unified, open platform,” Nayyar told me. “Our data ingestion framework supports a thousand-plus integrations. . . Our biggest differentiator is our threat content. We use machine learning, and we have a large research team producing threat content that’s all use-case driven, content that can be used for proactive response and proactive risk reduction.”
I also had a fascinating chat with Jonathan Desrocher and Ian Amit, co-founders of Gomboc.ai, which emerged from stealth at Black Hat with a $5 million seed funding round and a strikingly unique solution. With generative AI all the rage, Gomboc is tapping into what Amit and Desrocher characterized as the polar opposite – “deterministic AI.”
Gomboc’s innovation appears to be a simplified way to drag-and-drop robust security policy onto cloud IT resources, such as AWS processing and storage. Instead of using generative AI to guess, based on information about the feature sets it can see, deterministic AI runs through a series of predetermined checks, then applies reasoning to conclude whether a cloud asset is securely configured; it either is, or it isn’t, Desrocher told me.
Baked-in security
“It’s deterministic and it also changes the focus of what you’re modeling,” he says. “Do you model past behavior and try to extract rules to predict the future? Or are you actually modeling the problem domain to understand the physics of how it works, so that you can predict the future based on the laws of nature, if you will.”
Fresh out of stealth mode, Gomboc has a ways to go to prove it can gain traction. Amit and Desrocher, of course, have high hopes to make a big difference.
Here’s what Amit told me: “Over the medium term, we’re going to change the way that security is being managed for cloud infrastructure. And in the long term, we’re going to change the way that cloud infrastructure, in general, is being managed . . . our policy engine can also be applied to performance, cost and resilience so that DevOps won’t need to inundate themselves with those intricacies of finding the correct parameters to make things run correctly. Security is going to be baked into the way you deploy your architecture.”
Along these same lines, I had a deep conversation with Camellia Chan, co-founder and CEO of Flexxon, a Singapore-based hardware vendor that’s also cutting against the grain. Chan walked me through how Flexxon has won partnerships with Lenovo, HP and other OEMs to embed Flexxon solid state memory drives in new laptops. Branded “X-Phy,” these advanced SSDs contain AI-infused mechanisms that provide a last line security check, she told me. A full drill down is coming in my podcast discussion with Chan, so stay tuned.
The transformation progresses. I’ll keep watch and keep reporting.
Acohido
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(LW provides consulting services to the vendors we cover.)