How to stop Googlebot from crawling my website?

Asked 22-Mar-2023
Viewed 406 times


Google uses its crawler, Googlebot, to identify the content on your website, and it decides your ranking on the basis of that content.


1 Answer



If you want to stop Googlebot from crawling your website, you can do so with a robots.txt file. A robots.txt file is a simple text file placed in the root directory of your website that tells search engine crawlers which pages or sections of the site should not be crawled.

To create a robots.txt file, follow these steps:

Open a text editor like Notepad or Sublime Text.

Type the following text at the beginning of the file:

User-agent: Googlebot
Disallow: /

This will tell Googlebot not to crawl any pages on your site.
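As an aside, if you want the same rule to apply to every crawler that honors robots.txt, not just Googlebot, the wildcard user agent covers all of them:

User-agent: *
Disallow: /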


If you want Googlebot to crawl some pages while blocking others, you can add further rules to the file. For example, to block a directory called "private" while still allowing Googlebot to crawl all pages in a directory called "blog," you can use the following rules:

User-agent: Googlebot
Disallow: /private/
Allow: /blog/

This will allow Googlebot to crawl all pages in the /blog/ directory, but will still block the /private/ directory.

Save the file as "robots.txt" and upload it to the root directory of your website using an FTP client or file manager, so that it is reachable at https://www.example.com/robots.txt (with your own domain in place of example.com).
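If you want to sanity-check the rules before deploying them, you can test them locally. Here is a minimal sketch using Python's standard urllib.robotparser module; the example.com URLs are placeholders:

from urllib.robotparser import RobotFileParser

# The example rules from above; example.com is a placeholder domain.
rules = """User-agent: Googlebot
Disallow: /private/
Allow: /blog/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# can_fetch(useragent, url) reports whether the agent may crawl the URL.
print(rp.can_fetch("Googlebot", "https://www.example.com/blog/post"))    # True
print(rp.can_fetch("Googlebot", "https://www.example.com/private/file")) # False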

It's important to note that not all crawlers follow the robots.txt protocol, and some may ignore your instructions. Additionally, a robots.txt file won't prevent your pages from being indexed if they are linked to from other sites. If you want to make sure your pages aren't indexed, you may need other methods, such as password protection or adding a "noindex" robots meta tag to your pages. Keep in mind that a crawler can only see a noindex tag on pages it is allowed to crawl, so don't block a page in robots.txt if you are relying on noindex to keep it out of the index.
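For reference, the standard form of that tag, placed in the <head> of each page you want kept out of the index, is:

<meta name="robots" content="noindex">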