From what I understand, this is likely HappyDOM, a headless DOM emulator typically used for scraping or automated testing.
Here’s what I’ve tried so far:

- Created a Firewall rule that matches a User-Agent containing “HappyDOM” and set it to Deny, but I kept seeing new requests marked as Allowed.
- Tried again with a different rule using a partial User-Agent match and Deny, with no effect.
- Most recently, I added a rule on the JA4 Digest field, matching the fingerprint of these requests, and also set it to Deny, but I’m still seeing requests from the same User-Agent in my logs with status “Allowed”.

So far, nothing seems to stop these requests from going through.
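To double-check whether the rules match at all, I’ve been reproducing the request myself with the same User-Agent and comparing status codes (a Deny rule should come back as a 403 rather than a 200). This is just my own test script, nothing official: it needs Node 18+ for the built-in fetch, and `example.com` / `/api/search` are placeholders for my real domain and API route.

```ts
// probe.ts: send requests with the HappyDOM User-Agent against my own site
// and log the status codes. Requires Node 18+ for the built-in fetch.
// "example.com" and "/api/search" are placeholders for my real domain/route.
const HAPPY_DOM_UA =
  "Mozilla/5.0 (X11; Linux x64) AppleWebKit/537.36 (KHTML, like Gecko) HappyDOM/0.0.0";

async function probe(url: string): Promise<void> {
  const res = await fetch(url, { headers: { "User-Agent": HAPPY_DOM_UA } });
  console.log(`${url} -> ${res.status}`);
}

(async () => {
  await probe("https://example.com/");                  // a static page
  await probe("https://example.com/api/search?q=test"); // the dynamic API route
})();
```

Both requests still come back with a 200, which matches the “Allowed” entries in the logs.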
My questions:

1. Is there a reason these requests still appear as Allowed even after matching a Deny rule?
2. How can I ensure that these requests are fully denied and never reach my site at all?
For context, my site is a mostly static Next.js project using SSG (static site generation). The only dynamic part is a single API route that handles search queries. I’m hosting it on Vercel (Hobby plan) with a custom domain. The site hasn’t been submitted to Google or any search engines, so I wasn’t expecting any crawler or bot activity at this stage. That’s why these repeated HappyDOM requests are concerning.
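If I can’t get the firewall rule itself to work, I’m considering a fallback at the application layer using Next.js middleware. This is only a sketch I haven’t deployed; the file placement, matcher, and plain 403 response are my own choices, not something from the Vercel docs.

```ts
// middleware.ts (project root): reject requests whose User-Agent contains
// "HappyDOM" before they reach pages or the search API route.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  const ua = request.headers.get("user-agent") ?? "";
  if (ua.includes("HappyDOM")) {
    // Return a bare 403 instead of rendering anything.
    return new NextResponse("Forbidden", { status: 403 });
  }
  return NextResponse.next();
}

// Skip Next.js internals and the favicon; everything else (pages and the
// search API route) goes through the check above.
export const config = {
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};
```

That said, I’d much rather block these at the firewall so they never reach the deployment at all, which is why I’m asking about the rules above.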
`Mozilla/5.0 (X11; Linux x64) AppleWebKit/537.36 (KHTML, like Gecko) HappyDOM/0.0.0` is a User-Agent string, not a JA4 digest, so you need to set the rule accordingly.
I had a JA4 Digest rule earlier but removed it after your clarification.
I haven’t re-added the User-Agent rule yet, because when I previously tried blocking on “HappyDOM” in the User-Agent string, the requests were still marked as Allowed.