How to stop search engine spiders from crawling specific parts of a site?

The robots.txt file can be used to stop search engine spiders from crawling all or part of your site. Create a robots.txt file in the root folder of your site; it tells spiders which parts of the site they may and may not crawl. Well-behaved spiders request this file before crawling anything else.
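
To see how a compliant spider interprets the file, here is a minimal sketch using Python's standard-library urllib.robotparser. The domain and page path are placeholders for illustration.

from urllib import robotparser

# A well-behaved crawler downloads /robots.txt before fetching anything else.
parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")  # placeholder domain
parser.read()  # fetches and parses the live file (needs network access)

# Ask whether a given user-agent may crawl a given URL.
if parser.can_fetch("googlebot", "https://www.example.com/some/page.html"):
    print("googlebot may crawl this URL")
else:
    print("googlebot is blocked from this URL")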

Following are some examples:

# Blocks all robots from the entire site. The asterisk wildcard means “all robots”:

User-agent: *
Disallow: /

# Blocks all robots from any URL that starts with “/eukhost/”, except the robot called “googlebot”, which gets its own empty Disallow rule:

User-agent: *
Disallow: /eukhost/
# This is a eukhost virtual URL space

User-agent: googlebot
Disallow:

# Bans Googlebot from crawling both /eukhost.htm and the /gallery folder:

User-agent: googlebot
Disallow: /eukhost.htm
Disallow: /gallery/
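
Before publishing rules like these, you can sanity-check them locally. The sketch below feeds the last example into urllib.robotparser and confirms which paths googlebot may fetch; the image filename is a hypothetical placeholder.

from urllib import robotparser

# Parse the rules from the example above without fetching anything.
rules = """\
User-agent: googlebot
Disallow: /eukhost.htm
Disallow: /gallery/
"""
parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("googlebot", "/eukhost.htm"))       # False - blocked
print(parser.can_fetch("googlebot", "/gallery/img1.jpg"))  # False - blocked (hypothetical file)
print(parser.can_fetch("googlebot", "/index.html"))        # True - everything else is allowed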
