I’m experiencing issues with Googlebot accessing my robots.txt file. When Google tries to fetch /robots.txt, it returns `Failed: Robots.txt unreachable`.
I’m hosting a static robots.txt file in the public/ directory of my Next.js app.
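For context, Next.js serves anything in public/ from the site root, so public/robots.txt resolves to /robots.txt. The file looks roughly like this (simplified; the actual directives and sitemap URL may differ):

```txt
# public/robots.txt — served at https://www.tabeer.ae/robots.txt
User-agent: *
Allow: /

Sitemap: https://www.tabeer.ae/sitemap.xml
```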
We have checked everything on our side and it all looks fine: the website works normally, and no changes have been made recently.
https://tabeer.ae/robots.txt is directly accessible from the browser, but Google Search Console reports it as unreachable. I am sharing some screenshots showing that everything was okay before 6 April; after that, robots.txt is not reachable by Google Search Console.
Hi @anshumanb, thank you for the reply. I updated the robots.txt, but it didn’t make any difference. It’s still the same: when I try the live URL test in Google Search Console, it says robots.txt is unreachable.
@anshumanb It was working just fine for some time, with no issues, and nothing has been changed either. But all of a sudden, since 6 April, this has been happening.
Please help me resolve this.
Hi @it-tabeerae, thanks for the additional information. I’m still digging in to see what else we can try. Can you confirm that you don’t have any firewall rules set up in your Vercel project settings?
Hey there! Thanks for reporting this. Sorry to hear you’re running into trouble with your robots.txt file in Google Search Console.
We looked into this internally and can confirm that Googlebot has been receiving valid 200 OK responses for /robots.txt — in fact, we’ve served 18 successful requests to Googlebot in just the last 12 hours.
That said, we did notice one thing: the only non-200 response was a 308 redirect from your raw domain (e.g. yourdomain.com) to www.yourdomain.com. While Google typically handles redirects well, it’s possible this redirect is causing a hiccup in how Googlebot is accessing your robots.txt.
To help us dig a little deeper, could you let us know:
Which version of your domain is added to Google Search Console — www or the root domain?
Is your robots.txt file accessible directly from both domain.com/robots.txt and www.domain.com/robots.txt? (There’s a quick way to check both, sketched after this list.)
If you haven’t already, try using Google’s robots.txt Tester to see how it renders your file from their end.
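For reference, here’s one quick way to check both hostnames yourself. This is just a sketch, assuming Node 18+ (for the built-in fetch) and an ESM-aware runner such as `npx tsx`; the hostnames are placeholders:

```ts
// check-robots.ts — prints the raw status each hostname returns for
// /robots.txt without following redirects, so a 308 shows up as-is.
const hosts = ["https://domain.com", "https://www.domain.com"]; // placeholders

for (const host of hosts) {
  const res = await fetch(`${host}/robots.txt`, { redirect: "manual" });
  // For a redirect, the Location header shows where the request is sent.
  console.log(host, res.status, res.headers.get("location") ?? "");
}
```

If the raw domain prints a 308 pointing at www, that matches what we’re seeing in our logs.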
So three days ago I fixed the issue in Google Search Console: I uploaded the sitemaps again and they fetched.
I also changed the robots.txt a bit: I removed the www from the sitemap links in robots.txt (shown below), and everything was fine after that.
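To illustrate, the only change was to the Sitemap line (roughly; the exact URL is just an example):

```txt
# Before
Sitemap: https://www.tabeer.ae/sitemap.xml

# After
Sitemap: https://tabeer.ae/sitemap.xml
```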
@anshumanb @pawlean
Hi, can you please help resolve this issue? I don’t see anything wrong on our side, as everything is the same as before, and it was working before.
Hi @it-tabeerae, after speaking to the team internally, we couldn’t find any requests that we declined. It could be a transient issue between Googlebot and your domain.
Hi @it-tabeerae, thanks for sharing the update. I tried the same website you shared but I get a “green” result with no issues.
Regardless, I think you can use Edge Middleware to update the response header and set it to text/html, or you can use vercel.json to update headers for the /sitemap.xml path.
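Here’s a minimal sketch of the Edge Middleware approach, assuming the goal is to override the Content-Type returned for /sitemap.xml. (Note that XML sitemaps are usually served as text/xml or application/xml rather than text/html, so use whichever value Search Console accepts.)

```ts
// middleware.ts — overrides the Content-Type header for /sitemap.xml.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(_request: NextRequest) {
  const response = NextResponse.next();
  // Set the header on the outgoing response; adjust the value as needed.
  response.headers.set("Content-Type", "text/xml");
  return response;
}

// Run only for the sitemap path.
export const config = {
  matcher: "/sitemap.xml",
};
```

The vercel.json route is equivalent: add a headers rule with "source": "/sitemap.xml" and a Content-Type entry.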