Indexing as seen by the crawler
You can choose how our web crawler* (search robot) identifies itself while indexing your website:
- Standard browser – the default and recommended option. Your website will load the same way your regular visitors see it.
- YandexBot – use this option to index your website as the Yandex search robot sees it. Our crawler will identify itself as the main Yandex indexing robot (YandexBot/3.0).
- Googlebot – use this option to index your website as the Google search robot sees it. Our crawler will identify itself as the Google web-search robot (Googlebot/2.1).
- Mysitemapgenerator – use the direct identification of our robot if you need separate crawl settings for it and the ability to manage its access to your website.
When choosing between these identification options, pay attention to how the robots.txt file is processed:
- With the «YandexBot» or «Googlebot» options, only the instructions addressed to that particular robot are considered (User-agent: Yandex or User-agent: Googlebot, respectively). The general User-agent: * section is used only when a «personal» section is missing.
- With the «Standard browser» or «Mysitemapgenerator» options, the crawler considers only the instructions in the general User-agent: * section. «Personal» sections such as User-agent: Yandex or User-agent: Googlebot are ignored.

* The feature is provided via the User-agent header and is subject to paragraphs 1.3 and 1.4 of the Public Offer.
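The fallback rule described above can be sketched with Python's standard `urllib.robotparser` module, which applies the same precedence: a robot's «personal» section wins, and the general User-agent: * section is used only when no personal section matches. The domain and the sample rules below are illustrative, not taken from any real site:

```python
from urllib import robotparser

# A sample robots.txt with a general section and a "personal" Yandex section.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: Yandex
Disallow: /no-yandex/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# YandexBot matches its personal section, so the general rules are ignored:
print(rp.can_fetch("YandexBot/3.0", "http://example.com/private/"))    # allowed
print(rp.can_fetch("YandexBot/3.0", "http://example.com/no-yandex/"))  # blocked

# A browser-like user agent has no personal section, so User-agent: * applies:
print(rp.can_fetch("Mozilla/5.0", "http://example.com/private/"))      # blocked
```

In other words, a crawler identified as YandexBot may fetch /private/ even though the general section forbids it, while a crawler identified as a standard browser obeys only the User-agent: * rules.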