AnyDesk, a widely used remote access software platform, fell victim to a ransomware attack that exposed its source code and private code-signing keys to hackers. The enterprise software company detected malicious activity within its computer networks on a Friday afternoon and promptly initiated remediation efforts.

Although the identity of the threat actor remains officially undisclosed, AnyDesk staff confirmed the incident as a ransomware attack and pledged to share more details as the investigation unfolds. Reports suggest that the infiltration may have occurred on January 29th of the current year, with identification taking place on February 2nd. In response, the company promptly disabled user login access.

There are suspicions that Midnight Blizzard, a state-funded actor reportedly linked to Russian intelligence, may be behind the incident.

Meanwhile, content delivery services provider Cloudflare revealed that its corporate computer network had been targeted by cybercriminals around Thanksgiving the previous year. The company disclosed that the attack leveraged stolen passwords obtained during the Okta data breach in October 2023.

Preliminary investigations by security experts from CrowdStrike indicated that the attackers successfully accessed the company’s AWS environment and its Atlassian Jira and Confluence instances. However, the intruders were unable to breach the Cloudflare dashboard or other instances of Okta’s software.

As a precautionary measure, Cloudflare tested over 5000 systems and replaced 15 in its Sao Paulo Data Center, although experts have not confirmed whether these systems were compromised in the incident.

The primary objective of the cyber attack appears to be straightforward—gather intelligence and share it with interested parties, including state-funded actors and competitors.


Microsoft today issued security updates for more than 100 newly-discovered vulnerabilities in its Windows operating system and related software, including four flaws that are already being exploited. In addition, Apple recently released emergency updates to quash a pair of zero-day bugs in iOS.

Apple last week shipped emergency updates in iOS 17.0.3 and iPadOS 17.0.3 in response to active attacks. The patch fixes CVE-2023-42824, which attackers have been using in targeted attacks to elevate their access on a local device.

Apple said it also patched CVE-2023-5217, which is not listed as a zero-day bug. However, as Bleeping Computer pointed out, this flaw is caused by a weakness in the open-source “libvpx” video codec library, which was previously patched as a zero-day flaw by Google in the Chrome browser and by Microsoft in Edge, Teams, and Skype products. For anyone keeping count, this is the 17th zero-day flaw that Apple has patched so far this year.

Fortunately, the zero-days affecting Microsoft customers this month are somewhat less severe than usual, with the exception of CVE-2023-44487. This weakness is not specific to Windows but instead exists within the HTTP/2 protocol used by the World Wide Web: Attackers have figured out how to use a feature of HTTP/2 to massively increase the size of distributed denial-of-service (DDoS) attacks, and these monster attacks reportedly have been going on for several weeks now.

Amazon, Cloudflare and Google all released advisories today about how they’re addressing CVE-2023-44487 in their cloud environments. Google’s Damian Menscher wrote on Twitter/X that the exploit — dubbed a “rapid reset attack” — works by sending a request and then immediately cancelling it (a feature of HTTP/2). “This lets attackers skip waiting for responses, resulting in a more efficient attack,” Menscher explained.

Natalie Silva, lead security engineer at Immersive Labs, said this flaw’s impact on enterprise customers could be significant, and lead to prolonged downtime.

“It is crucial for organizations to apply the latest patches and updates from their web server vendors to mitigate this vulnerability and protect against such attacks,” Silva said. “In this month’s Patch Tuesday release by Microsoft, they have released both an update to this vulnerability, as well as a temporary workaround should you not be able to patch immediately.”

Microsoft also patched zero-day bugs in Skype for Business (CVE-2023-41763) and WordPad (CVE-2023-36563). The latter vulnerability could expose NTLM hashes, which are used for authentication in Windows environments.

“It may or may not be a coincidence that Microsoft announced last month that WordPad is no longer being updated, and will be removed in a future version of Windows, although no specific timeline has yet been given,” said Adam Barnett, lead software engineer at Rapid7. “Unsurprisingly, Microsoft recommends Word as a replacement for WordPad.”

Other notable bugs addressed by Microsoft include CVE-2023-35349, a remote code execution weakness in the Message Queuing (MSMQ) service, a technology that allows applications across multiple servers or hosts to communicate with each other. This vulnerability has earned a CVSS severity score of 9.8 (10 is the worst possible). Happily, the MSMQ service is not enabled by default in Windows, although Immersive Labs notes that Microsoft Exchange Server can enable this service during installation.

Speaking of Exchange, Microsoft also patched CVE-2023-36778, a vulnerability in all current versions of Exchange Server that could allow attackers to run code of their choosing. Rapid7’s Barnett said successful exploitation requires that the attacker be on the same network as the Exchange Server host, and use valid credentials for an Exchange user in a PowerShell session.

For a more detailed breakdown on the updates released today, see the SANS Internet Storm Center roundup. If today’s updates cause any stability or usability issues in Windows, AskWoody.com will likely have the lowdown on that.

Please consider backing up your data and/or imaging your system before applying any updates. And feel free to sound off in the comments if you experience any difficulties as a result of these patches.


Fighting API Bots with Cloudflare's Invisible Turnstile

There's a "hidden" API on HIBP. Well, it's not "hidden" insofar as it's easily discoverable if you watch the network traffic from the client, but it's not meant to be called directly, rather only via the web app. It's called "unified search" and it looks just like this:


It's been there in one form or another since day 1 (so almost a decade now), and it serves a sole purpose: to perform searches from the home page. That is all - only from the home page. It's called asynchronously from the client without needing to post back the entire page and by design, it's super fast and super easy to use. Which is bad. Sometimes.

To understand why it's bad we need to go back in time all the way to when I first launched the API that was intended to be consumed programmatically by other people's services. That was easy, because it was basically just documenting the API that sat behind the home page of the website already, the predecessor to the one you see above. And then, unsurprisingly in retrospect, it started to be abused so I had to put a rate limit on it. Problem is, that was a very rudimentary IP-based rate limit and it could be circumvented by someone with enough IPs, so fast forward a bit further and I put auth on the API which required a nominal payment to access it. At the same time, that unified search endpoint was created and home page searches updated to use that rather than the publicly documented API. So, 2 APIs with 2 different purposes.
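
That documented, authenticated API is the one that now requires a key with every request. If it helps, here's a minimal sketch of calling it - the key is obviously a placeholder, and because the API requires a user agent to be set, it's intended to run server-side (e.g. Node 18+) rather than from a browser:

const response = await fetch('https://haveibeenpwned.com/api/v3/breachedaccount/test@example.com', {
    headers: {
        'hibp-api-key': '[your key goes here]',
        'user-agent': 'example-client'
    }
});
// 200 returns breaches for the account, 404 means none, 401 means a bad key, 429 means the rate limit was exceeded
console.log(response.status);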

The primary objective for putting a price on the public API was to tackle abuse. And it did - it stopped it dead. By attaching a rate limit to a key that required a credit card to purchase it, abusive practices (namely enumerating large numbers of email addresses) disappeared. This wasn't just about putting a financial cost to queries, it was about putting an identity cost to them; people are reluctant to start doing nasty things with a key traceable back to their own payment card! Which is why they turned their attention to the non-authenticated, non-documented unified search API.

Let's look at a 3 day period of requests to that API earlier this year, keeping in mind this should only ever be requested organically by humans performing searches from the home page:


This is far from organic usage with requests peaking at 121.3k in just 5 minutes. Which poses an interesting question: how do you create an API that should only be consumed asynchronously from a web page and never programmatically via a script? You could chuck a CAPTCHA on the front page and require that be solved first but let's face it, that's not a pleasant user experience. Rate limit requests by IP? See the earlier problem with that. Block UA strings? Pointless, because they're easily randomised. Rate limit an ASN? It gets you part way there, but what happens when you get a genuine flood of traffic because the site has hit the mainstream news? It happens.

Over the years, I've played with all sorts of combinations of firewall rules based on parameters such as geolocations with incommensurate numbers of requests to their populations, JA3 fingerprints and, of course, the parameters mentioned above. Based on the chart above these obviously didn't catch all the abusive traffic, but they did catch a significant portion of it:


If you combine it with the previous graph, that's about a third of all the bad traffic in that period or in other words, two thirds of the bad traffic was still getting through. There had to be a better way, which brings us to Cloudflare's Turnstile:

With Turnstile, we adapt the actual challenge outcome to the individual visitor or browser. First, we run a series of small non-interactive JavaScript challenges gathering more signals about the visitor/browser environment. Those challenges include, proof-of-work, proof-of-space, probing for web APIs, and various other challenges for detecting browser-quirks and human behavior. As a result, we can fine-tune the difficulty of the challenge to the specific request and avoid ever showing a visual puzzle to a user.

"Avoid ever showing a visual puzzle to a user" is a polite way of saying they avoid the sucky UX of CAPTCHA. Instead, Turnstile offers the ability to issue a "non-interactive challenge" which implements the sorts of clever techniques mentioned above and as it relates to this blog post, that can be an invisible non-interactive challenge. This is one of 3 different widget types with the others being a visible non-interactive challenge and a non-intrusive interactive challenge. For my purposes on HIBP, I wanted a zero-friction implementation nobody saw, hence the invisible approach. Here's how it works:


Get it? Ok, let's break it down further as it relates to HIBP, starting with when the front page first loads and it embeds the Turnstile widget from Cloudflare:

<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>

The widget takes responsibility for running the non-interactive challenge and returning a token. This needs to be persisted somewhere on the client side which brings us to embedding the widget:

<div ID="turnstileWidget" class="cf-turnstile" data-sitekey="0x4AAAAAAADY3UwkmqCvH8VR" data-callback="turnstileCompleted"></div>

Per the docs in that link, the main thing here is to have an element with the "cf-turnstile" class set on it. If you happen to go take a look at the HIBP HTML source right now, you'll see that element precisely as it appears in the code block above. However, check it out in your browser's dev tools so you can see how it renders in the DOM and it will look more like this:


Expand that DIV tag and you'll find a whole bunch more content set as a result of loading the widget, but that's not relevant right now. What's important is the data-token attribute because that's what's going to prove you're not a bot when you run the search. How you implement this from here is up to you, but what HIBP does is picks up the token and sets it in the "cf-turnstile-response" header then sends it along with the request when that unified search endpoint is called:
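
To make that a bit more concrete, here's a rough sketch of the client-side plumbing rather than HIBP's exact JavaScript (the endpoint path and the fallback behaviour are illustrative only): the function named in the widget's data-callback attribute receives the token, and the search then forwards it in that header:

let turnstileToken;

// The widget invokes this via the data-callback attribute once the invisible challenge is solved
function turnstileCompleted(token) {
    turnstileToken = token;
}

async function unifiedSearch(query) {
    const response = await fetch('/unifiedsearch/' + encodeURIComponent(query), {
        headers: { 'cf-turnstile-response': turnstileToken }
    });
    if (response.status === 401) {
        // Token missing, invalid or already consumed: fall back to a full page post back with other controls
        document.forms[0].submit();
        return null;
    }
    return response.json();
}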


So, at this point we've issued a challenge, the browser has solved the challenge and received a token back, now that token has been sent along with the request for the actual resource the user wanted, in this case the unified search endpoint. The final step is to validate the token and for this I'm using a Cloudflare worker. I've written a lot about workers in the past so here's the short pitch: it's code that runs in each one of Cloudflare's 300+ edge nodes around the world and can inspect and modify requests and responses on the fly. I already had a worker to do some other processing on unified search requests, so I just added the following:

const token = request.headers.get('cf-turnstile-response');

if (token === null) {
    return new Response('Missing Turnstile token', { status: 401 });
}

const ip = request.headers.get('CF-Connecting-IP');

let formData = new FormData();
formData.append('secret', '[secret key goes here]');
formData.append('response', token);
formData.append('remoteip', ip);

const turnstileUrl = 'https://challenges.cloudflare.com/turnstile/v0/siteverify';
const result = await fetch(turnstileUrl, {
    body: formData,
    method: 'POST',
});
const outcome = await result.json();

if (!outcome.success) {
    return new Response('Invalid Turnstile token', { status: 401 });
}

That should be pretty self-explanatory and you can find the docs for this on Cloudflare's server-side validation page which goes into more detail, but in essence, it does the following:

  1. Gets the token from the request header and rejects the request if it doesn't exist
  2. Sends the token, your secret key and the user's IP along to Turnstile's "siteverify" endpoint
  3. If the token is not successfully verified then return 401 "Unauthorised", otherwise continue with the request

And because this is all done in a Cloudflare worker, any of those 401 responses never even touch the origin. Not only do I not need to process the request in Azure, the person attempting to abuse my API gets a nice speedy response directly from an edge node near them 🙂

So, what does this mean for bots? If there's no token then they get booted out right away. If there's a token but it's not valid then they get booted out at the end. But can't they just take a previously generated token and use that? Well, yes, but only once:

If the same response is presented twice, the second and each subsequent request will generate an error stating that the response has already been consumed.

And remember, a real browser had to generate that token in the first place so it's not like you can just automate the process of token generation then throw it at the API above. (Sidenote: that server-side validation link includes how to handle idempotency, for example when retrying failed requests.) But what if a real human fails the verification? That's entirely up to you but in HIBP's case, that 401 response causes a fallback to a full page post back which then implements other controls, for example an interactive challenge.

Time for graphs and stats, starting with the one in the hero image of this page where we can see the number of times Turnstile was issued and how many times it was solved over the week prior to publishing this post:


That's a 91% hit rate of solved challenges which is great. That remaining 9% is either humans with a false positive or... bots getting rejected 😎

More graphs, this time how many requests to the unified search page were rejected by Turnstile:


That 990k number doesn't marry up with the 476k unsolved ones from before because they're 2 different things: the unsolved challenges are when the Turnstile widget is loaded but not solved (hopefully due to it being a bot rather than a false positive), whereas the 401 responses to the API is when a successful (and previously unused) Turnstile token isn't in the header. This could be because the token wasn't present, wasn't solved or had already been used. You get more of a sense of how many of these rejected requests were legit humans when you drill down into attributes like the JA3 fingerprints:


In other words, of those 990k failed requests, almost 40% of them were from the same 5 clients. Seems legit 🤔

And about a third were from clients with an identical UA string:


And so on and so forth. The point being that the number of actual legitimate requests from end users that were inconvenienced by Turnstile would be exceptionally small, almost certainly a very low single-digit percentage. I'll never know exactly because bots obviously attempt to emulate legit clients and sometimes legit clients look like bots and if we could easily solve this problem then we wouldn't need Turnstile in the first place! Anecdotally, that very small false positive number stacks up as people tend to complain pretty quickly when something isn't optimal, and I implemented this all the way back in March. Yep, 5 months ago, and I've waited this long to write about it just to be confident it's actually working. Over 100M Turnstile challenges later, I'm confident it is - I've not seen a single instance of abnormal traffic spikes to the unified search endpoint since rolling this out. What I did see initially though is a lot of this sort of thing:


By now it should be pretty obvious what's going on here, and it should be equally obvious that it didn't work out real well for them 😊

The bot problem is a hard one for those of us building services because we're continually torn in different directions. We want to build a slick UX for humans but an obtrusive one for bots. We want services to be easily consumable, but only in the way we intend them to... which might be by the good bots playing by the rules!

I don't know exactly what Cloudflare is doing in that challenge and I'll be honest, I don't even know what a "proof-of-space" is. But the point of using a service like this is that I don't need to know! What I do know is that Cloudflare sees about 20% of the internet's traffic and because of that, they're in an unrivalled position to look at a request and make a determination on its legitimacy.

If you're in my shoes, go and give Turnstile a go. And if you want to consume data from HIBP, go and check out the official API docs, the uh, unified search doesn't work real well for you any more 😎


To Infinity and Beyond, with Cloudflare Cache Reserve

What if I told you... that you could run a website from behind Cloudflare and only have 385 daily requests miss their cache and go through to the origin service?


No biggy, unless... that was out of a total of more than 166M requests in the same period:


Yep, we just hit "five nines" of cache hit ratio on Pwned Passwords: 99.999%. Actually, it was 99.9998%, but we're at the point now where that's just splitting hairs. Let's talk about how we've managed to have only two requests in a million hit the origin, beginning with a bit of history:

Ah, memories 😊 Back then, Pwned Passwords was serving way fewer requests in a month than what we do in a day now and the cache hit ratio was somewhere around 92%. Put another way, instead of 2 in every million requests hitting the origin it was 85k. And we were happy with that! As the years progressed, the traffic grew and the caching model was optimised so our stats improved:

And that's pretty much where we levelled out, at about the 99-and-a-bit percent mark. We were really happy with that as it was now only 5k requests per million hitting the origin. There was bound to be a number somewhere around that mark due to the transient nature of cache and eviction criteria inevitably meaning a Cloudflare edge node somewhere would need to reach back to the origin website and pull a new copy of the data. But what if Cloudflare never had to do that unless explicitly instructed to do so? I mean, what if it just stayed in their cache unless we actually changed the source file and told them to update their version? Welcome to Cloudflare Cache Reserve:


Ok, so I may have annotated the important bit but that's what it feels like - magic - because you just turn it on and... that's it. You still serve your content the same way, you still need the appropriate cache headers and you still have the same tiered caching as before, but now there's a "cache reserve" sitting between that and your origin. It's backed by R2 which is their persistent data store and you can keep your cached things there for as long as you want. However, per the earlier link, it's not free:


You pay based on how much you store for how long, how much you write and how much you read. Let's put that in real terms and just as a brief refresher (longer version here), remember that Pwned Passwords is essentially just 16^5 (just over 1 million) text files of about 30kb each for the SHA-1 hashes and a similar number for the NTLM ones (albeit slightly smaller file sizes). Here are the Cache Reserve usage stats for the last 9 days:


We can now do some pretty simple maths with that and working on the assumption of 9 days, here's what we get:


2 bucks a day 😲 But this has taken nearly 16M requests off my origin service over this period of time so I haven't paid for the Azure Function execution (which is cheap) nor the egress bandwidth (which is not cheap). But why are there only 16M read operations over 9 days when earlier we saw 167M requests to the API in a single day? Because if you scroll back up to the "insert magic here" diagram, Cache Reserve is only a fallback position and most requests (i.e. 99.52% of them) are still served from the edge caches.

Note also that there are nearly 1M write operations and there are 2 reasons for this:

  1. Cache Reserve is being seeded with source data as requests come in and miss the edge cache. This means that our cache hit ratio is going to get much, much better yet as not even half of all the potentially cacheable API queries are in Cache Reserve. It also means that the 48c per day cost is going to come way down 🙂
  2. Every time the FBI feeds new passwords into the service, the impacted file is purged from cache (a rough sketch of that purge call follows this list). This means that there will always be write operations and, of course, read operations as the data flows to the edge cache and makes corresponding hits to the origin service. The prevalence of all this depends on how much data the feds feed in, but it'll never get to zero whilst they're seeding new passwords.
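
A purge like that boils down to a call against Cloudflare's purge_cache API for the affected URL. As a rough sketch (the zone ID and token are placeholders, and the URL is just an example hash range):

// Purge one hash range from Cloudflare's cache after its source file changes
async function purgeRange(rangeUrl) {
    await fetch('https://api.cloudflare.com/client/v4/zones/<zone-id>/purge_cache', {
        method: 'POST',
        headers: {
            'Authorization': 'Bearer <api-token>',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({ files: [rangeUrl] })
    });
}

// e.g. purgeRange('https://api.pwnedpasswords.com/range/21BD1');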

An untold number of businesses rely on Pwned Passwords as an integral part of their registration, login and password reset flows. Seriously, the number is "untold" because we have no idea who's actually using it, we just know the service got hit three and a quarter billion times in the last 30 days:


Giving consumers of the service confidence that not only is it highly resilient, but also massively fast is essential to adoption. In turn, more adoption helps drive better password practices, fewer account takeovers and more smiles all round 😊

As those remaining hash prefixes populate Cache Reserve, keep an eye on the "cf-cache-status" response header. If you ever see a value of "MISS" then congratulations, you're literally one in a million!
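
If you'd rather check that programmatically than via dev tools, a couple of lines will do it (the hash prefix here is just an example, and this assumes Node 18+ with its built-in fetch):

// Request one hash range and report whether it came from Cloudflare's cache
const res = await fetch('https://api.pwnedpasswords.com/range/21BD1');
console.log(res.headers.get('cf-cache-status'));   // almost always "HIT"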

Full disclosure: Cloudflare provides services to HIBP for free and they helped in getting Cache Reserve up and running. However, they had no idea I was writing this blog post and reading it live in its entirety is the first anyone there has seen it. Surprise! 👋


Down the Cloudflare / Stripe / OWASP Rabbit Hole: A Tale of 6 Rabbits Deep 🐰 🐰 🐰 🐰 🐰 🐰

I found myself going down a previously unexplored rabbit hole recently, or more specifically, what I thought was "a" rabbit hole but in actual fact was an ever-expanding series of them that led me to what I refer to in the title of this post as "6 rabbits deep". It's a tale of firewalls, APIs and sifting through layers and layers of different services to sniff out the root cause of something that seemed very benign, but actually turned out to be highly impactful. Let's go find the rabbits!

The Back Story

When you buy an API key on Have I Been Pwned (HIBP), Stripe handles all the payment magic. I love Stripe, it's such an awesome service that abstracts away so much pain and it's dead simple to integrate via their various APIs. It's also dead simple to configure Stripe to send notices back to your own service via webhooks. For example, when an invoice is paid or a customer is updated, Stripe sends information about that event to HIBP and then lists each call on the webhooks dashboard in their portal:


There's a whole range of different events that can be listened to and webhooks fired; here we're seeing just a couple of them that are self-explanatory in name. When an invoice is paid, the callback looks something like this:


HIBP has received this call and updated its own DB such that for a new customer, they can now retrieve an API key or, for an existing customer whose subscription has renewed, the API key validity period has been extended. The same callback is also issued when someone upgrades an API key, for example when going from 10RPM (requests per minute) to 50RPM. It's super important that HIBP gets that callback so it can appropriately upgrade the customer's key and they can immediately begin making more requests. When that call doesn't happen, well, let's go down the first rabbit hole.
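
Before heading down it, for anyone who hasn't wired up Stripe webhooks before, the receiving side is conceptually nothing more than this. To be clear, it's not HIBP's actual implementation, just the typical pattern sketched in Node with Express, placeholder keys and a placeholder path:

const express = require('express');
const stripe = require('stripe')('sk_test_placeholder');
const app = express();

// Stripe signs every webhook; the raw body is needed to verify that signature
app.post('/stripe-webhook', express.raw({ type: 'application/json' }), (req, res) => {
    let event;
    try {
        event = stripe.webhooks.constructEvent(req.body, req.headers['stripe-signature'], 'whsec_placeholder');
    } catch (err) {
        return res.status(400).send('Invalid signature');
    }

    if (event.type === 'invoice.paid') {
        // Placeholder for the real work: create, extend or upgrade the API key tied to this Stripe customer
        extendApiKeyForCustomer(event.data.object.customer);
    }

    res.sendStatus(200);
});

function extendApiKeyForCustomer(customerId) {
    console.log('Extend or upgrade the key for', customerId);
}

app.listen(3000);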

The Failed API Key Upgrade 🐰

This should never happen:


This came in via HIBP's API key support portal and is pretty self-explanatory. I checked the customer's account on Stripe and it did indeed show an active 50RPM subscription, but when drilling down into the associated payment, I found the following:


Ok, so at least I know where things have started to go wrong, but why? Over to the webhooks dashboard and into the failed payments and things look... suboptimal:


Dammit! Fortunately this is only a small single-digit percentage of all callbacks, but every time this fails it's either stopping someone like the guy above from making the requests they've paid for or potentially, causing someone's API key to expire even though they've paid for it. The latter in particular I was really worried about as it would nuke their key and whatever they'd built on top of it would cease to function. Fortunately, because that's such an impactful action I'd built in heaps of buffer for just such an occurrence and I'd gotten onto this issue quickly, but it was disconcerting all the same.

So, what's happening? Well, the response is HTTP 403 "Forbidden" and the body is clearly a Cloudflare challenge page so something at their end is being triggered. Looks like it's time to go down the next rabbit hole.

Cloudflare's Firewall and Logs 🐰 🐰

Desperate just to quickly restore functionality, I dropped into Cloudflare's WAF and allowed all Stripe's outbound IPs used for webhooks to bypass their security controls:


This wasn't ideal, but it only created risk for requests originating from Stripe and it got things up and running again quickly. With time up my sleeve I could now delve deeper and work out precisely what was going on, starting with the logs. Cloudflare has a really extensive set of APIs that can control a heap of features of the service, including pulling back logs (note: this is a feature of their Enterprise plan). I queried out a slice of the logs corresponding to when some of the 403s from Stripe's dashboard occurred and found 2 entries similar to this one:

{"BotScore":1,"BotScoreSrc":"Verified Bot","CacheCacheStatus":"unknown","ClientASN":16509,"ClientCountry":"us","ClientIP":"54.187.205.235","ClientRequestHost":"haveibeenpwned.com","ClientRequestMethod":"POST","ClientRequestReferer":"","ClientRequestURI":"[redacted]","ClientRequestUserAgent":"Stripe/1.0 (+https://stripe.com/docs/webhooks)","EdgeRateLimitAction":"","EdgeResponseStatus":403,"EdgeStartTimestamp":1674073983931000000,"FirewallMatchesActions":["managedChallenge"],"FirewallMatchesRuleIDs":["6179ae15870a4bb7b2d480d4843b323c"],"FirewallMatchesSources":["firewallManaged"],"OriginResponseStatus":0,"WAFAction":"unknown","WorkerSubrequest":false}

The IP in that log entry, 54.187.205.235, is one of Stripe's outbound IPs, and the "FirewallMatchesRuleIDs" collection has a value in it. Ergo, something about this request triggered the firewall and caused it to be challenged. I'm sure many of us have gone through the following thought process before:

What did I change?

Did I change anything?

Did they change something?

Except "they" could have been either Cloudflare or Stripe; if it wasn't me (and I was fairly certain it wasn't), was it a Cloudflare change to the rules or a Stripe change to a webhook payload that was now triggering an existing rule? Time to dig deeper again so it's over to the Cloudflare dashboard and down into the WAF events for requests to the webhook callback path:


Yep, something proper broke! Let's drill deeper and look at recent events for that IP:


As you dig deeper through troubleshooting exercises like this, you gradually turn up more and more information that helps piece the entire puzzle together. In this case, it looks like the "Inbound Anomaly Score Exceeded" rule was being triggered. What's that? And why? Time to go down another rabbit hole.

The Cloudflare OWASP Core Ruleset 🐰 🐰 🐰

So, deeper and deeper down the rabbit holes we go, this time into the depths of the requests that triggered the managed rule:


Well that's comprehensive 🙂

There's a lot to unpack here so let's begin with the ruleset that the previously identified "Inbound Anomaly Score Exceeded" rule belongs to, the Cloudflare OWASP Core Ruleset:

The Cloudflare OWASP Core Ruleset is Cloudflare’s implementation of the OWASP ModSecurity Core Rule Set (CRS). Cloudflare routinely monitors for updates from OWASP based on the latest version available from the official code repository.

That link is yet another rabbit hole altogether so let me summarise succinctly here: Cloudflare uses OWASP's rules to identify anomalous traffic based on a customer-defined paranoia level (how strict you want to be) and then applies a score threshold (also customer-defined) at which an action will be taken, for example challenging the request. What I learned as this saga progressed is that the "Inbound Anomaly Score Exceeded" rule is actually a rollup of the rules beneath it. The OWASP score of "26" is the sum of the 6 rules listed beneath it and once it exceeds 25, the superset rule is triggered.

Further - and this is the really important bit - Cloudflare routinely updates the rules from OWASP which makes sense because these are ever-evolving in response to new threats. And when did they last upgrade the rules? It looks like they announced it right before I started having issues:


Whilst it's not entirely clear from above when this release was scheduled to occur, I did reach out to Cloudflare support and was advised it had already taken place:

Please note that we did bump the OWASP version, which we are integrating with to 3.3.4 as noted on our scheduled changes.

So maybe it's not Cloudflare's fault or Stripe's fault, but OWASP's fault? In fairness to all, I don't think it's anyone's fault per se and is instead just an unfortunate result of everyone doing their best to keep the bad guys out. Unless... it really is Stripe's fault because there's something in the request payload that was always fishy and is now being caught? But why for only some requests and not others? Next rabbit!

Cloudflare Payload Logging 🐰 🐰 🐰 🐰

Sometimes, people on the internet lose their minds a bit over things they really shouldn't. One of those things, in my experience, is Cloudflare's interception of traffic and it's something I wrote about in detail nearly 7 years ago now in my piece on security absolutism. Cloudflare plays an enormously valuable role in the internet's ecosystem and a substantial part of the value comes from being able to inspect, cache, optimise, and yes, even reject traffic. When you use Cloudflare to protect your website, they're applying rulesets like the aforementioned OWASP ones and in order to do that, they must be able to inspect your traffic! But they don't log it, not all of it, rather just "metadata generated by our products" as they refer to it on their logs page. We saw an example of that earlier on with Stripe's request from their IP showing it triggered a firewall rule, but what we didn't see is the contents of that POST request, the actual payload that triggered the rule. Let's go grab that.

Because the contents of a POST request can contain sensitive information, Cloudflare doesn't log it. Obviously they see it in transit (that's how OWASP's rules can be applied to it), but it's not stored anywhere and even if you want to capture it, they don't want to be able to see it. That's where payload logging (another Enterprise plan feature) comes in and what's really neat about that is every payload must be encrypted with a public key retained by Cloudflare whilst only you retain the private key. The setup looks like this:


Pretty self-explanatory and once done, right under where we previously saw the additional logs we now have the ability to decrypt the payload:


As promised, this requires the private key from earlier:


And now, finally, we have the actual payload that triggered the rule, seen here with my own test data:

[ " },\n \"billing_reason\": \"subscription_update\",\n \"charge\": null,\n \"collection_method\": \"charge_automatically\",\n \"created\": 1674351619,\n \"currency\": \"usd\",\n \"custom_fields\": null,\n \"customer\": \"cus_MkA71FpZ7XXRlt\",\n \"customer_address\": ", " },\n \"customer_email\": \"troy-hunt+1@troyhunt.com\",\n \"customer_name\": \"Troy Hunt 1\",\n \"customer_phone\": null,\n \"customer_shipping\": null,\n \"customer_tax_exempt\": \"none\",\n \"customer_tax_ids\": [\n\n ],\n \"default_payment_method\": null,\n \"default_source\": null,\n \"default_tax_rates\": [\n\n ],\n \"description\": \"You can manage your subscription (i.e. cancel it or regenerate the API key) at any time by verifying your email address here: https://haveibeenpwned.com/API/Key\",\n \"discount\": null,\n \"discounts\": [\n\n ],\n \"due_date\": null,\n \"ending_balance\": -11804,\n \"footer\": null,\n \"from_invoice\": null,\n \"hosted_invoice_url\": \"https://invoice.stripe.com/i/acct_1EdQYpEF14jWlYDw/test_YWNjdF8xRWRRWXBFRjE0aldsWUR3LF9OREo5SlpqUFFvVnFtQnBVcE91YUFXemtkRHFpQWNWLDY0ODkyNDIw02004bEyljdC?s=ap\",\n \"invoice_pdf\": \"https://pay.stripe.com/invoice/acct_1EdQYpEF14jWlYDw/test_YWNjdF8xRWRRWXBFRjE0aldsWUR3LF9OREo5SlpqUFFvVnFtQnBVcE91YUFXemtkRHFpQWNWLDY0ODkyNDIw02004bEyljdC/pdf?s=ap\",\n \"last_finalization_error\": null,\n \"latest_revision\": null,\n \"lines\": ", " ", " ],\n \"discountable\": false,\n \"discounts\": [\n\n ],\n \"invoice_item\": \"ii_1MSsXfEF14jWlYDwB1nfZvFm\",\n \"livemode\": false,\n \"metadata\": ", " },\n \"period\": ", " },\n \"plan\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4eLcJ7JYF02f\",\n \"tiers_mode\": null,\n \"transform_usage\": null,\n \"trial_period_days\": null,\n \"usage_type\": \"licensed\"\n },\n \"price\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4eLcJ7JYF02f\",\n \"recurring\": ", " },\n \"tax_behavior\": \"unspecified\",\n \"tiers_mode\": null,\n \"transform_quantity\": null,\n \"type\": \"recurring\",\n \"unit_amount\": 15000,\n \"unit_amount_decimal\": \"15000\"\n },\n \"proration\": true,\n \"proration_details\": ", " \"il_1MMjfcEF14jWlYDwoe7uhDPF\"\n ]\n }\n },\n \"quantity\": 1,\n \"subscription\": \"sub_1MMjfcEF14jWlYDwi8JWFcxw\",\n \"subscription_item\": \"si_N6xapJ8gSXdp7W\",\n \"tax_amounts\": [\n\n ],\n \"tax_rates\": [\n\n ],\n \"type\": \"invoiceitem\",\n \"unit_amount_excluding_tax\": \"-14304\"\n },\n ", " ],\n \"discountable\": true,\n \"discounts\": [\n\n ],\n \"livemode\": false,\n \"metadata\": ", " },\n \"period\": ", " },\n \"plan\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4lTSl4axd9mt\",\n \"tiers_mode\": null,\n \"transform_usage\": null,\n \"trial_period_days\": null,\n \"usage_type\": \"licensed\"\n },\n \"price\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4lTSl4axd9mt\",\n \"recurring\": ", " },\n \"tax_behavior\": \"unspecified\",\n \"tiers_mode\": null,\n \"transform_quantity\": null,\n \"type\": \"recurring\",\n \"unit_amount\": 2500,\n \"unit_amount_decimal\": \"2500\"\n },\n \"proration\": false,\n \"proration_details\": ", " },\n \"quantity\": 1,\n \"subscription\": \"sub_1MMjfcEF14jWlYDwi8JWFcxw\",\n \"subscription_item\": \"si_NDJ98tQrCcviJf\",\n \"tax_amounts\": [\n\n ],\n \"tax_rates\": [\n\n ],\n \"type\": \"subscription\",\n \"unit_amount_excluding_tax\": \"2500\"\n }\n ],\n \"has_more\": false,\n \"total_count\": 2,\n \"url\": \"/v1/invoices/in_1MSsXfEF14jWlYDwxHKk4ASA/lines\"\n },\n \"livemode\": false,\n \"metadata\": ", " },\n \"next_payment_attempt\": null,\n 
\"number\": \"04FC1917-0008\",\n \"on_behalf_of\": null,\n \"paid\": true,\n \"paid_out_of_band\": false,\n \"payment_intent\": null,\n \"payment_settings\": ", " },\n \"period_end\": 1674351619,\n \"period_start\": 1674351619,\n \"post_payment_credit_notes_amount\": 0,\n \"pre_payment_credit_notes_amount\": 0,\n \"quote\": null,\n \"receipt_number\": null,\n \"rendering_options\": null,\n \"starting_balance\": 0,\n \"statement_descriptor\": null,\n \"status\": \"paid\",\n \"status_transitions\": ", " },\n \"subscription\": \"sub_1MMjfcEF14jWlYDwi8JWFcxw\",\n \"subtotal\": -11804,\n \"subtotal_excluding_tax\": -11804,\n \"tax\": null,\n \"test_clock\": null,\n \"total\": -11804,\n \"total_discount_amounts\": [\n\n ],\n \"total_excluding_tax\": -11804,\n \"total_tax_amounts\": [\n\n ],\n \"transfer_data\": null,\n \"webhooks_delivered_at\": 1674351619\n }\n },\n \"livemode\": false,\n \"pending_webhooks\": 1,\n \"request\": ", " },\n \"type\": \"invoice.paid\"\n}" ]

But enough of what's present in the payload, it's what's absent that especially struck me. No obvious XSS patterns, nor SQL injection or any other suspicious looking strings. The request looked totally benign, so why did it trigger the rule?

I wanted to compare the payload of a blocked request with a similar request that wasn't blocked, but they're only logged at Cloudflare when they trigger a rule. No problem, it's easy to grab the full request from Stripe's webhook history so I found one that passed and one that failed and diff'd them both:


This clearly isn't the full 200 lines, but it's a very similar story over the remainder of the files; tiny differences largely down to dates, IDs, and of course, the customers themselves. No suspicious patterns, no funky characters, nothing visibly abnormal. It's a bit pointless to even mention it because they're near identical, but the payload on the left is the one that passed the firewall whilst the payload on the right was blocked.

Next rabbit hole!

Cloudflare's Internal Rules Engine 🐰 🐰 🐰 🐰 🐰

Completely running out of ideas and options, focus moved to the folks inside Cloudflare who were already aware there was an issue:

What followed was a period of back and forth initially with Cloudflare, then Stripe as well with everyone trying to nut out exactly where things were going wrong. Essentially, the process went like this:

Is Cloudflare inadvertently blocking the requests?

Is the OWASP ruleset raising false positives?

Is Stripe issuing requests that are deemed to be malicious?

And round and round we went. At one time, Cloudflare identified a change in the OWASP ruleset which appeared to have resulted in their implementation inadvertently triggering the WAF. They rolled it back and... the same thing happened. We deferred back to Stripe on the assumption that something must have changed on their end, but they couldn't identify any change that would have any sort of material impact. We were stumped, but we also had an easy fix just one last rabbit hole away...

Fine Tuning the Cloudflare WAF 🐰 🐰 🐰 🐰 🐰 🐰

The joy of a managed firewall is that someone else takes all the rigmarole of looking after it away. I'm going to talk more about that in the summary shortly but clearly, that also creates risk as you're delegating control of traffic flow to someone else. Fortunately, Cloudflare gives you a load of configurability with their managed rules which makes it easy to add custom exceptions:


This meant I could create a simple exception that was much more intelligent than the previous "just let all outbound Stripe IPs in" by filtering down to the specific path those webhooks were flowing in to:
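
Conceptually, the exception is just a filter expression on the managed ruleset with the action set to skip the OWASP rules. It looks something along these lines, where the path and the IP list are placeholders rather than HIBP's real values:

(http.request.uri.path eq "/example-stripe-webhook-path" and ip.src in {203.0.113.10 203.0.113.11})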


And finally, because sequence matters, I dragged that rule right up to the top of the pile so it would cause matching inbound requests to skip all the other rules:


And finally, there were no more rabbits 😊

Lessons Learned

I know what you're thinking - "what was the actual root cause?" - and to be honest, I still don't know. I don't know if it was Cloudflare or OWASP or Stripe or if it even impacted other customers of these services and to be honest, yes, that's a little frustrating. But I learned a bunch of stuff and for that alone, this was a worthwhile exercise I took three big lessons away from:

Firstly, understanding the plumbing of how all these bits work together is super important. I was lucky this wasn't a time critical issue and I had the luxury of learning without being under duress; how rules, payload inspection and exception management all work together is really valuable stuff to understand. And just like that, as if to underscore my first point, I found this right before hitting the publish button on the blog post:


I added a couple more OWASP rules to the exception in Cloudflare (things like a MySQL rule that was adding 5 points), and we were back in business.

Secondly, I look at the managed WAF Cloudflare provides more favourably than I did before simply because I have a better understanding of how comprehensive it is. I want to write code and run apps on the web, that's my focus, and I want someone else to provide that additional layer on top that continuously adapts to block new and emerging threats. I want to understand it (and I now do, at least certainly better than before), but I don't want managing it day in and day out to be my job.

And finally, IMHO, Stripe needs a better mechanism to report on webhook failures:

Waiting until stuff breaks really isn't ideal and whilst I'm sure you could plug into the (very extensive) API ecosystem Stripe has, this feels like an easy feature for them to build in. So, Stripe friends, when you read this that's a big "yes" vote from me for some form of anomalous webhook response alerting.

This experience was equal parts frustration and fun and whilst the former is probably obvious, the latter is simply due to having an opportunity to learn something new that's a pretty important part of the service I run. May my frustrated fun story here make your life easier in the future if you face the same problems 😊

Phishers are enjoying remarkable success using text messages to steal remote access credentials and one-time passcodes from employees at some of the world’s largest technology companies and customer support firms. A recent spate of SMS phishing attacks from one cybercriminal group has spawned a flurry of breach disclosures from affected companies, which are all struggling to combat the same lingering security threat: The ability of scammers to interact directly with employees through their mobile devices.

In mid-June 2022, a flood of SMS phishing messages began targeting employees at commercial staffing firms that provide customer support and outsourcing to thousands of companies. The missives asked users to click a link and log in at a phishing page that mimicked their employer’s Okta authentication page. Those who submitted credentials were then prompted to provide the one-time password needed for multi-factor authentication.

The phishers behind this scheme used newly-registered domains that often included the name of the target company, and sent text messages urging employees to click on links to these domains to view information about a pending change in their work schedule.

The phishing sites leveraged a Telegram instant message bot to forward any submitted credentials in real-time, allowing the attackers to use the phished username, password and one-time code to log in as that employee at the real employer website. But because of the way the bot was configured, it was possible for security researchers to capture the information being sent by victims to the public Telegram server.

This data trove was first reported by security researchers at Singapore-based Group-IB, which dubbed the campaign “0ktapus” for the attackers targeting organizations using identity management tools from Okta.com.

“This case is of interest because despite using low-skill methods it was able to compromise a large number of well-known organizations,” Group-IB wrote. “Furthermore, once the attackers compromised an organization they were quickly able to pivot and launch subsequent supply chain attacks, indicating that the attack was planned carefully in advance.”

It’s not clear how many of these phishing text messages were sent out, but the Telegram bot data reviewed by KrebsOnSecurity shows they generated nearly 10,000 replies over approximately two months of sporadic SMS phishing attacks targeting more than a hundred companies.

A great many responses came from those who were apparently wise to the scheme, as evidenced by the hundreds of hostile replies that included profanity or insults aimed at the phishers: The very first reply recorded in the Telegram bot data came from one such employee, who responded with the username “havefuninjail.”

Still, thousands replied with what appear to be legitimate credentials — many of them including one-time codes needed for multi-factor authentication. On July 20, the attackers turned their sights on internet infrastructure giant Cloudflare.com, and the intercepted credentials show at least five employees fell for the scam (although only two employees also provided the crucial one-time MFA code).

Image: Cloudflare.com

In a blog post earlier this month, Cloudflare said it detected the account takeovers and that no Cloudflare systems were compromised. But Cloudflare said it wanted to call attention to the phishing attacks because they would probably work against most other companies.

“This was a sophisticated attack targeting employees and systems in such a way that we believe most organizations would be likely to be breached,” Cloudflare CEO Matthew Prince wrote. “On July 20, 2022, the Cloudflare Security team received reports of employees receiving legitimate-looking text messages pointing to what appeared to be a Cloudflare Okta login page. The messages began at 2022-07-20 22:50 UTC. Over the course of less than 1 minute, at least 76 employees received text messages on their personal and work phones. Some messages were also sent to the employees family members.”

On three separate occasions, the phishers targeted employees at Twilio.com, a San Francisco based company that provides services for making and receiving text messages and phone calls. It’s unclear how many Twilio employees received the SMS phishes, but the data suggest at least four Twilio employees responded to a spate of SMS phishing attempts on July 27, Aug. 2, and Aug. 7.

On that last date, Twilio disclosed that on Aug. 4 it became aware of unauthorized access to information related to a limited number of Twilio customer accounts through a sophisticated social engineering attack designed to steal employee credentials.

“This broad based attack against our employee base succeeded in fooling some employees into providing their credentials,” Twilio said. “The attackers then used the stolen credentials to gain access to some of our internal systems, where they were able to access certain customer data.”

That “certain customer data” included information on roughly 1,900 users of the secure messaging app Signal, which relied on Twilio to provide phone number verification services. In its disclosure on the incident, Signal said that with their access to Twilio’s internal tools the attackers were able to re-register those users’ phone numbers to another device.

On Aug. 25, food delivery service DoorDash disclosed that a “sophisticated phishing attack” on a third-party vendor allowed attackers to gain access to some of DoorDash’s internal company tools. DoorDash said intruders stole information on a “small percentage” of users that have since been notified. TechCrunch reported last week that the incident was linked to the same phishing campaign that targeted Twilio.

This phishing gang apparently had great success targeting employees of all the major mobile wireless providers, but most especially T-Mobile. Between July 10 and July 16, dozens of T-Mobile employees fell for the phishing messages and provided their remote access credentials.

“Credential theft continues to be an ongoing issue in our industry as wireless providers are constantly battling bad actors that are focused on finding new ways to pursue illegal activities like this,” T-Mobile said in a statement. “Our tools and teams worked as designed to quickly identify and respond to this large-scale smishing attack earlier this year that targeted many companies. We continue to work to prevent these types of attacks and will continue to evolve and improve our approach.”

This same group saw hundreds of responses from employees at some of the largest customer support and staffing firms, including Teleperformanceusa.com, Sitel.com and Sykes.com. Teleperformance did not respond to requests for comment. KrebsOnSecurity did hear from Christopher Knauer, global chief security officer at Sitel Group, the customer support giant that recently acquired Sykes. Knauer said the attacks leveraged newly-registered domains and asked employees to approve upcoming changes to their work schedules.

Image: Group-IB.

Knauer said the attackers set up the phishing domains just minutes in advance of spamming links to those domains in phony SMS alerts to targeted employees. He said such tactics largely sidestep automated alerts generated by companies that monitor brand names for signs of new phishing domains being registered.

“They were using the domains as soon as they became available,” Knauer said. “The alerting services don’t often let you know until 24 hours after a domain has been registered.”

On July 28 and again on Aug. 7, several employees at email delivery firm Mailchimp provided their remote access credentials to this phishing group. According to an Aug. 12 blog post, the attackers used their access to Mailchimp employee accounts to steal data from 214 customers involved in cryptocurrency and finance.

On Aug. 15, the hosting company DigitalOcean published a blog post saying it had severed ties with MailChimp after its Mailchimp account was compromised. DigitalOcean said the MailChimp incident resulted in a “very small number” of DigitalOcean customers experiencing attempted compromises of their accounts through password resets.

According to interviews with multiple companies hit by the group, the attackers are mostly interested in stealing access to cryptocurrency, and to companies that manage communications with people interested in cryptocurrency investing. In an Aug. 3 blog post from email and SMS marketing firm Klaviyo.com, the company’s CEO recounted how the phishers gained access to the company’s internal tools, and used that to download information on 38 crypto-related accounts.

The ubiquity of mobile phones became a lifeline for many companies trying to manage their remote employees throughout the Coronavirus pandemic. But these same mobile devices are fast becoming a liability for organizations that use them for phishable forms of multi-factor authentication, such as one-time codes generated by a mobile app or delivered via SMS.

Because as we can see from the success of this phishing group, this type of data extraction is now being massively automated, and employee authentication compromises can quickly lead to security and privacy risks for the employer’s partners or for anyone in their supply chain.

Unfortunately, a great many companies still rely on SMS for employee multi-factor authentication. According to a report this year from Okta, 47 percent of workforce customers deploy SMS and voice factors for multi-factor authentication. That’s down from 53 percent that did so in 2018, Okta found.

Some companies (like Knauer’s Sitel) have taken to requiring that all remote access to internal networks be managed through work-issued laptops and/or mobile devices, which are loaded with custom profiles that can’t be accessed through other devices.

Others are moving away from SMS and one-time code apps and toward requiring employees to use physical FIDO multi-factor authentication devices such as security keys, which can neutralize phishing attacks because any stolen credentials can’t be used unless the phishers also have physical access to the user’s security key or mobile device.

This came in handy for Twitter, which announced last year that it was moving all of its employees to using security keys, and/or biometric authentication via their mobile device. The phishers’ Telegram bot reported that on June 16, 2022, five employees at Twitter gave away their work credentials. In response to questions from KrebsOnSecurity, Twitter confirmed several employees were relieved of their employee usernames and passwords, but that its security key requirement prevented the phishers from abusing that information.

Twitter accelerated its plans to improve employee authentication following the July 2020 security incident, wherein several employees were phished and relieved of credentials for Twitter’s internal tools. In that intrusion, the attackers used Twitter’s tools to hijack accounts for some of the world’s most recognizable public figures, executives and celebrities — forcing those accounts to tweet out links to bitcoin scams.

“Security keys can differentiate legitimate sites from malicious ones and block phishing attempts that SMS 2FA or one-time password (OTP) verification codes would not,” Twitter said in an Oct. 2021 post about the change. “To deploy security keys internally at Twitter, we migrated from a variety of phishable 2FA methods to using security keys as our only supported 2FA method on internal systems.”

Sending Spammers to Password Purgatory with Microsoft Power Automate and Cloudflare Workers KV

How best to punish spammers? I give this topic a lot of thought because I spend a lot of time sifting through the endless rubbish they send me. And that's when it dawned on me: the punishment should fit the crime - robbing me of my time - which means that I, in turn, need to rob them of their time. With the smallest possible overhead on my time, of course. So, earlier this year I created Password Purgatory with the singular goal of putting spammers through the hellscape that is attempting to satisfy really nasty password complexity criteria. And I mean really nasty criteria, much worse than you've ever seen before. I open-sourced it, took a bunch of PRs, built out the API to present increasingly inane password complexity criteria, then left it at that. Until now, because finally, it's live, working and devilishly beautiful 😈

Step 1: Receive Spam

This is the easy bit - I didn't have to do anything for this step! But let me put it into context and give you a real world sample:

Ugh. Nasty stuff, off to hell for them it is, and it all begins with filing the spam into a special folder called "Send Spammer to Password Purgatory":

That's the extent of work involved on a spam-by-spam basis, but let's peel back the covers and look at what happens next.

Step 2: Trigger a Microsoft Power Automate Flow

Microsoft Power Automate (previously "Microsoft Flow") is a really neat way of triggering a series of actions based on an event, and there's a whole lot of connectors built in to make life super easy. Easy on us as the devs, that is, less easy on the spammers because here's what happens as soon as I file an email in the aforementioned folder:

Using the built-in connector to my Microsoft 365 email account, the presence of a new email in that folder triggers a brand new instance of a flow. Following that, I've added the "HTTP" connector which enables me to make an outbound request:

All this request does is make a POST to an API on Password Purgatory called "create-hell". It passes an API key because I don't want just anyone making these requests as it will create data that will persist at Cloudflare. Speaking of which, let's look at what happens over there.

Step 3: Call a Cloudflare Worker and Create a Record in KV

Let's start with some history: Back in the not too distant past, Cloudflare wasn't a host and instead would just reverse proxy requests through to origin services and do cool stuff with them along the way. This made adding HTTPS to any website easy (and free), added heaps of really neat WAF functionality and empowered us to do cool things with caching. But this was all in-transit coolness whilst the app logic, data and vast bulk of the codebase sat at that origin site. Cloudflare Workers started to change that and suddenly we had code on the edge running in hundreds of nodes around the world, nice and close to our visitors. Did that start to make Cloudflare a "host"? Hmm... but the data itself was still on the origin service (transient caching aside). Fast forward to now and there are multiple options to store data on Cloudflare's edges including their (presently beta) R2 service, Durable Objects, the (forthcoming) D1 SQL database and of most importance to this blogpost, Workers KV. Does this make them a host if you can now build entire apps within their environment? Maybe so, but let's skip the titles for now and focus on the code.

All the code I'm going to refer to here is open source and available in the public Password Purgatory Logger GitHub repo. Very early on in the index.js file that does all the work, you'll see a function called "createHell" which is called when the flow step above runs. That code creates a GUID then stores it in KV, after which I can easily view it in the Cloudflare dashboard:
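To give a sense of the shape of that function, here's a minimal sketch of a "create-hell" handler. It's illustrative only - the KV binding name, the API key header and the route below are assumptions, not the actual repo code:

```js
// Illustrative sketch only - binding name (PURGATORY), header name and route are assumptions.
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    if (url.pathname === "/api/create-hell" && request.method === "POST") {
      // Knock back callers that don't present the shared API key
      if (request.headers.get("x-api-key") !== env.API_KEY) {
        return new Response("Unauthorized", { status: 401 });
      }

      // Create a GUID and store it in KV with an empty value for now
      const kvKey = crypto.randomUUID();
      await env.PURGATORY.put(kvKey, "");

      // Hand the key back so the flow can thread it through subsequent steps
      return new Response(JSON.stringify({ kvKey }), {
        headers: { "content-type": "application/json" },
      });
    }

    return new Response("Not found", { status: 404 });
  },
};
```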

There's no value yet, just a key, and it's returned via a JSON response in a property called "kvKey". To read that back in the flow, I need a "Parse JSON" step with a schema I generated from a sample:
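The response body is just that one property, so the schema generated from the sample is about as small as schemas get - roughly this (the GUID is a placeholder):

```js
// Placeholder example of the "create-hell" response the flow parses:
//   { "kvKey": "00000000-0000-0000-0000-000000000000" }
// ...and the kind of schema the "Parse JSON" step generates from that sample:
const parseJsonSchema = {
  type: "object",
  properties: {
    kvKey: { type: "string" },
  },
};
```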

At this point I now have a unique ID in persistent storage and it's available in the flow, which means it's time to send the spammer an email.

Step 4: Invite the Spammer to Hell

Because it would be rude not to respond, I'd like to send the spammer back an email and invite them to my very special registration form. To do this, I've grabbed the "Reply to email" connector and fed the kvKey through to a hyperlink:

It's an HTML email with the key hidden within the hyperlink tag so it doesn't look overtly weird. Using this connector means that when the email sends, it looks precisely like I've lovingly crafted it myself:

With the entire flow now executed, we can view the history of each step and see how the data moves between them:

Now, we play the waiting game 😊

Step 5: Log Spammer Pain

Wasting spammer time in and of itself is good. Causing them pain by having them attempt to pass increasingly obtuse password complexity criteria is better. But the best thing - the pièce de résistance - is to log that pain and share it publicly for our collective entertainment 🤣

So, by following the link the spammer ends up here (you're welcome to follow that link and have a play with it):

The kvKey is passed via the query string and the page invites the spammer to begin the process of becoming a partner. All they need to leave is an email address... and a password. That page then embeds 2 scripts from the Password Purgatory website, both of which you can find in the open-source and public GitHub repository I created in the original blog post. Each attempt at creating an account sends only the password off to the original Password Purgatory API I created months ago, after which it responds with the next set of criteria. But each attempt also sends off the criteria that was presented (none on the first go, then something increasingly bizarre on each subsequent go), the password they tried to use to satisfy the criteria, and the kvKey so it can all be tied together. What that means is that the Cloudflare Workers KV entry created earlier gradually builds up as follows:
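To make that concrete, here's an illustrative mock-up of how the stored value grows; the criteria and passwords below are invented for the sake of the example, not real spammer data:

```js
// Invented example data - each attempt POSTs its criteria, password and kvKey to the
// logging Worker, and the value stored against that kvKey accumulates something like this:
const exampleKvValue = [
  { criteria: null, password: "Summer2022!" }, // first go: no criteria presented yet
  { criteria: "Password must contain at least 1 emoji", password: "Summer2022!😀" },
  { criteria: "Password must include a US post code", password: "Summer2022!😀 90210" },
];
```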

There are a couple of little conditions built into the code, both sketched just after this list:

  1. If a kvKey is passed in the log request that doesn't actually exist on Cloudflare, HTTP 404 is returned. This is to ensure randos out there don't attempt to submit junk logs into KV.
  2. Once the first password is logged, there's a 15-minute window within which any further passwords can be logged. The reason is twofold: firstly, I don't want to share the spammer's attempts publicly until I'm confident no more passwords can be logged, just in case they add PII or something else inappropriate. Secondly, once they know the value of the kvKey, a non-spammer could start submitting logs (for example, when I tweet it later on or share it via this blog post).
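Here's a rough sketch of how those two checks might hang together in the logging handler; again, it's illustrative rather than the repo's exact code (the PURGATORY binding and status codes are assumptions):

```js
// Illustrative sketch of the logging handler's guard rails (not the repo's exact code).
async function logPassword(request, env) {
  const { kvKey, criteria, password } = await request.json();
  if (!kvKey) {
    return new Response("Bad request", { status: 400 });
  }

  // Condition 1: a kvKey that was never created gets a 404, so randos can't seed junk logs
  const existing = await env.PURGATORY.get(kvKey);
  if (existing === null) {
    return new Response("Not found", { status: 404 });
  }

  const attempts = existing ? JSON.parse(existing) : [];

  // Condition 2: once the first password lands, only accept further logs for 15 minutes
  const fifteenMinutes = 15 * 60 * 1000;
  if (attempts.length > 0 && Date.now() - attempts[0].loggedAt > fifteenMinutes) {
    return new Response("Logging window closed", { status: 403 });
  }

  attempts.push({ criteria, password, loggedAt: Date.now() });
  await env.PURGATORY.put(kvKey, JSON.stringify(attempts));

  return new Response(JSON.stringify({ ok: true }), {
    headers: { "content-type": "application/json" },
  });
}
```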

That's everything needed to lure the spammer in and record their pain; now for the really fun bit 😊

Step 6: Enjoy Revelling in Spammer Pain

The very first time the spammer's password attempt is logged, the Cloudflare Worker sends me an email to let me know I have a new spammer hooked (this capability using MailChannels only launched this year):
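That notification boils down to a single outbound call from the Worker to the MailChannels send API. A minimal sketch, with placeholder addresses and wording:

```js
// Sketch of the "new spammer hooked" notification (addresses and copy are placeholders).
async function notifyNewSpammer(kvKey) {
  await fetch("https://api.mailchannels.net/tx/v1/send", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      personalizations: [{ to: [{ email: "me@example.com" }] }],
      from: { email: "worker@example.com", name: "Password Purgatory" },
      subject: "You've hooked a new spammer!",
      content: [
        {
          type: "text/plain",
          // The link path is an assumption; it just needs to point at the public log view
          value: `See their pain unfold: https://example.com/api/view-hell?kvKey=${kvKey}`,
        },
      ],
    }),
  });
}
```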

It was so exciting getting this email yesterday, I swear it's the same sensation as literally getting a fish on your line! That link is one I can share to put the spammer's pain on display for the world to see. This is achieved with another Cloudflare Workers route that simply pulls out the logs for the given kvKey and formats them neatly in an HTML response:
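In sketch form, that route just reads the KV value for the key and spits back some markup; the function name and HTML below are illustrative only:

```js
// Illustrative sketch of the public log view (a real version should HTML-encode the
// logged values, since they're attacker-supplied).
async function viewHell(kvKey, env) {
  const raw = await env.PURGATORY.get(kvKey);
  if (raw === null || raw === "") {
    return new Response("Nothing logged yet", { status: 404 });
  }

  const attempts = JSON.parse(raw);
  const rows = attempts
    .map((a) => `<li><em>${a.criteria ?? "(no criteria yet)"}</em>: <code>${a.password}</code></li>`)
    .join("");

  return new Response(`<html><body><h1>Spammer pain log</h1><ol>${rows}</ol></body></html>`, {
    headers: { "content-type": "text/html;charset=UTF-8" },
  });
}
```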

Ah, satisfaction 😊 I listed the amount of time the spammer burned with a goal of further refining the complexity criteria in the future to try to keep them "hooked" for longer. Is the requirement for a US post code in the password a bit too geographically specific, for example? Time will tell and I wholeheartedly welcome PRs to that effect in the original Password Purgatory API repo.

Oh - and just to ensure traction and exposure are maximised, there's a neatly formatted Twitter card that includes the last criteria and password used, you know, the ones that finally broke the spammer's spirit and caused them to give up:

Summary

Clearly, I've taken a great deal of pleasure in messing with spammers and I hope you do too. I've gotta be honest - I've never been so excited to go through my junk mail! But I also thoroughly enjoyed putting this together with Power Automate and Workers KV; I think it's super cool that you can pull an app together like this with a combination of browser-based config plus code and storage that runs directly in hundreds of globally distributed edge nodes around the world. I hope the spammers appreciate just how elegant this all is 🤣

Some online news outlets claimed that a downtime on Cloudflare's network was caused by a cyber attack. But the cloud services giant responded with a press statement saying the disruption on Monday this week was the result of a network change, not a digital attack.

The Cloudflare downtime left big companies such as Discord, DoorDash, NordVPN, Zerodha and Upstox without network access for a couple of hours. Although the network infrastructure provider restored service quickly, several customers continued to face connection issues because their ISPs were slow to pick up the network change.

Coinbase and Shopify customers using a different network service provider also suffered downtime in parallel with Cloudflare. It was then that social media platforms such as Facebook and Twitter buzzed with disinformation suggesting the outage could have been caused by cyber attacks from the Russian Federation, which had long since declared a digital war on the West for supporting Ukraine with arms, ammunition, finances and essentials such as food, fuel and satellite-based internet.

Thus, to shut down the gossip mongers, Cloudflare issued a public statement explaining that some of its data centers had undergone a network change and that a technical glitch caused the downtime.

NOTE 1- Cloudflare started as a content delivery network provider and then moved into the business of DDoS mitigation. Founded in 2010, it now also acts as a hosting services provider for customers across America and the rest of the world.

NOTE 2- When Russia invaded Ukraine, Cloudflare declined to join the group of companies that announced they were withdrawing support from the Putin-led nation for waging war on Ukraine. In May 2022, it clarified that it would defy demands to exit that market while still not supporting Moscow in its war against the Volodymyr Zelensky-led nation.

 

The post Cloudflare clarifies network change and not Cyber Attack appeared first on Cybersecurity Insiders.