Finding ID | Version | Rule ID | IA Controls | Severity |
---|---|---|---|---|
V-2260 | WG310 IIS7 | SV-32333r2_rule | ECLP-1 | Low |
Description |
---|
Search engines are constantly at work on the Internet. They are augmented by agents, often referred to as spiders or bots, which endeavor to capture and catalog web site content. In turn, these search engines make the content they obtain and catalog available to any public web user. Such information in the public domain defeats the purpose of a limited-access or certificate-based web server, provides information to those not authorized to access the web site, and could provide clues about the site’s architecture to malicious parties. |
STIG | Date |
---|---|
IIS 7.0 WEB SITE STIG | 2014-01-09 |
Check Text (C-32739r3_chk) |
---|
1. Open the IIS Manager.
2. Click the site name under review.
3. If the Search Engine Optimization option exists, then this is a finding.
4. Click the View Content tab.
5. Open the robots.txt file.
6. Ensure the following entry exists in the robots.txt file:

User-agent: *
Disallow: /

If the robots.txt file does not exist or the entry above is not contained in the robots.txt file, this is a finding.
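The robots.txt portion of this check can also be spot-checked from a script. The sketch below is an assumption-laden aid, not part of the STIG procedure: it assumes Python is available and that the site's physical root is `C:\inetpub\wwwroot` (substitute the actual root of the site under review). It only confirms that both required directives are present somewhere in the file, so it supplements rather than replaces the manual review.

```python
# Minimal sketch: verify a site's robots.txt contains the directives
# required by this check. The site root below is an assumption;
# point it at the physical root of the IIS web site under review.
from pathlib import Path

SITE_ROOT = Path(r"C:\inetpub\wwwroot")  # assumed physical root path
REQUIRED = ("User-agent: *", "Disallow: /")

def robots_blocks_all(site_root: Path) -> bool:
    """Return True only if robots.txt exists and holds both directives."""
    robots = site_root / "robots.txt"
    if not robots.is_file():
        return False  # a missing robots.txt is a finding
    lines = [line.strip() for line in robots.read_text().splitlines()]
    return all(directive in lines for directive in REQUIRED)

if __name__ == "__main__":
    if robots_blocks_all(SITE_ROOT):
        print("robots.txt contains the required entries - not a finding")
    else:
        print("robots.txt missing or incomplete - finding")
```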
Fix Text (F-29066r3_fix) |
---|
1. Open the IIS Manager.
2. Click the site name under review.
3. Remove the Search Engine Optimization option.
4. Add a robots.txt file to the web site root directory containing the lines:

User-agent: *
Disallow: /
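Step 4 of the fix can likewise be scripted. The sketch below again assumes Python and uses `C:\inetpub\wwwroot` as a placeholder for the site's physical root; it writes a crawler-blocking robots.txt and deliberately refuses to overwrite an existing file so a site-specific robots.txt is not clobbered without review.

```python
# Minimal sketch: create a robots.txt in the site root that disallows
# all crawlers, per step 4 of the fix text. The site root below is an
# assumption; use the physical root of the site being remediated.
from pathlib import Path

SITE_ROOT = Path(r"C:\inetpub\wwwroot")  # assumed physical root path
ROBOTS_CONTENT = "User-agent: *\nDisallow: /\n"

def write_robots(site_root: Path) -> None:
    """Write a crawler-blocking robots.txt, refusing to overwrite an existing one."""
    robots = site_root / "robots.txt"
    if robots.exists():
        raise FileExistsError(f"{robots} already exists; review it before replacing")
    robots.write_text(ROBOTS_CONTENT)

if __name__ == "__main__":
    write_robots(SITE_ROOT)
    print(f"Wrote {SITE_ROOT / 'robots.txt'}")
```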