This module provides a single class, RobotFileParser, which answers
questions about whether a particular user agent can fetch a URL on
the Web site that published the robots.txt file. For more details on
the structure of robots.txt files, see
http://info.webcrawler.com/mak/projects/robots/norobots.html.
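
As a minimal sketch of typical use, assuming Python 3's urllib.robotparser
(the module was named robotparser in Python 2); the site URL and the
user agent string are placeholders:

    from urllib.robotparser import RobotFileParser

    # Point the parser at a site's robots.txt and download it.
    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # placeholder URL
    rp.read()

    # Ask whether a given user agent may fetch a particular URL.
    print(rp.can_fetch("MyCrawler", "https://example.com/private/page.html"))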
The mtime() method returns the time the robots.txt file was last
fetched. This is useful for long-running web spiders that need to
check for new robots.txt files periodically.
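
A sketch of that periodic-refresh pattern follows; the one-day refresh
interval, site URL, and user agent string are arbitrary choices for
illustration. It uses the class's modified() method, which records the
current time as the fetch time that mtime() reports:

    import time
    from urllib.robotparser import RobotFileParser

    REFRESH_SECONDS = 24 * 60 * 60  # hypothetical one-day refresh interval

    rp = RobotFileParser("https://example.com/robots.txt")  # placeholder URL
    rp.read()
    rp.modified()  # record the fetch time so mtime() reflects it

    def can_fetch_fresh(useragent, url):
        # Re-download robots.txt when the cached copy is older
        # than the refresh interval, then answer as usual.
        if time.time() - rp.mtime() > REFRESH_SECONDS:
            rp.read()
            rp.modified()
        return rp.can_fetch(useragent, url)

    print(can_fetch_fresh("MyCrawler", "https://example.com/index.html"))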