How to Stop Search Engine Spiders from Crawling Specific Parts of a Site

The robots.txt file can be used to stop search engine spiders from crawling all or part of your site. Create a robots.txt file in the root directory of your site; it tells search engine spiders which parts of the site they may and may not crawl. Well-behaved spiders request this file before crawling anything else.
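
As a rough sketch of that first request, Python's standard urllib.robotparser module performs the same check a polite spider does. The domain example.com and the user agent name "mybot" below are placeholders for illustration, not values from this article:

import urllib.robotparser

# Fetch the site's robots.txt and test a URL against its rules.
# example.com and "mybot" are placeholders for illustration only.
parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # spiders request this path first
parser.read()  # downloads and parses the file

if parser.can_fetch("mybot", "https://example.com/some/page.html"):
    print("Allowed to crawl this page")
else:
    print("Blocked by robots.txt")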

The following are some examples:

# Blocks all robots from visiting any part of this site. The asterisk wildcard means "all robots":

User-agent: *
Disallow: /

# Blocks all robots from visiting any URL that starts with "/eukhost/", except the robot called "googlebot":

User-agent: *
Disallow: /eukhost/ # This is a eukhost virtual URL space

User-agent: googlebot
Disallow:

# Blocks Googlebot from crawling both /eukhost.htm and the /gallery folder (note that Disallow paths should begin with a forward slash):

User-agent: googlebot
Disallow: /eukhost.htm
Disallow: /gallery
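
You can verify rules like these before deploying them. The sketch below uses Python's standard urllib.robotparser module to parse the last example's rules offline and check a few paths against them; /gallery/photo.jpg is a hypothetical path used only for illustration:

import urllib.robotparser

# Parse the rules from the last example above and check some paths
# against them; no network access is needed.
rules = """User-agent: googlebot
Disallow: /eukhost.htm
Disallow: /gallery
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("googlebot", "/eukhost.htm"))        # False: blocked
print(parser.can_fetch("googlebot", "/gallery/photo.jpg"))  # False: blocked
print(parser.can_fetch("googlebot", "/index.html"))         # True: allowed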

If you are looking for help with SEO, check out our all-in-one SEO tool, designed to make optimisation more effective and less of a burden.
