How to Block OpenAI’s Web Crawler from Crawling Your Site

Roger Stringer
August 10, 2023
1 min read

Just as you can use robots.txt to block search engines and various bots from crawling your site, you can do the same with OpenAI's GPTBot.

GPTBot is OpenAI’s web crawler. It can be identified by the following user-agent token and full user-agent string:

User agent token: GPTBot
Full user-agent string: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.0; +
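Because the user-agent string contains the `GPTBot` token, you can also recognize GPTBot requests in server logs or middleware with a simple substring check. A minimal sketch in Python (the `is_gptbot` helper name is mine, not part of any library):

```python
def is_gptbot(user_agent: str) -> bool:
    """Return True if a request's User-Agent header identifies GPTBot."""
    # The user-agent string contains the token "GPTBot", so a
    # case-insensitive substring check is enough.
    return "gptbot" in user_agent.lower()

# A header containing the token is detected; an ordinary browser is not.
print(is_gptbot("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.0)"))  # True
print(is_gptbot("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0"))  # False
```

Keep in mind that user agents can be spoofed, so this is a convenience check rather than a guarantee.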

To block GPTBot from crawling your site entirely, you can add the following to your robots.txt file:

User-agent: GPTBot
Disallow: /
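You can sanity-check the rules before deploying them with Python's standard urllib.robotparser module, which applies robots.txt semantics the same way a well-behaved crawler would (the example.com URLs are placeholders):

```python
from urllib.robotparser import RobotFileParser

# The blanket-block rules from above, parsed from a string
# rather than fetched over the network.
robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# GPTBot is blocked everywhere; other crawlers are unaffected.
print(rp.can_fetch("GPTBot", "https://example.com/any-page"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/any-page"))  # True
```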

If you want to allow GPTBot to crawl certain areas of your site, but block it from other areas, you can do this:

User-agent: GPTBot
Allow: /directory-1/
Disallow: /directory-2/
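The same urllib.robotparser check works for these partial rules, confirming that paths under /directory-1/ stay crawlable while /directory-2/ is off-limits (the directory names and URLs are the placeholders from the snippet above):

```python
from urllib.robotparser import RobotFileParser

# The selective allow/disallow rules from above.
robots_txt = """\
User-agent: GPTBot
Allow: /directory-1/
Disallow: /directory-2/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# /directory-1/ is allowed, /directory-2/ is blocked for GPTBot.
print(rp.can_fetch("GPTBot", "https://example.com/directory-1/post"))  # True
print(rp.can_fetch("GPTBot", "https://example.com/directory-2/post"))  # False
```

Paths not covered by either rule default to allowed, so list every area you want blocked explicitly.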
