Did you get an email from Google Search Console, telling you Googlebot cannot access CSS and JS files on your site?
If you have a WordPress site with a robots.txt file, this may be the cause.
Up until July 28th, 2015, it was common for WordPress sites to have a robots.txt file that blocked certain directories from user-agents (browsers and crawlers) as a security precaution. Starting today, Google is requesting access to crawl the CSS and JS files on your WordPress site, so it can fully render each page and confirm it is functioning correctly.
Not every WordPress site has a robots.txt file. If yours does, it is located in the root folder of the site or WordPress installation.
Many managed WordPress hosts and some security plugins include a robots.txt. If you have a security plugin installed, you may also have to check that you are not restricting Googlebot in your .htaccess file (also in the root folder).
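For illustration, here is a hypothetical mod_rewrite rule of the kind a security plugin might write into .htaccess. A rule like this blocks Googlebot outright and should be removed; the exact directives your plugin adds may differ, so this is just an example of what to look for:

```apache
# Hypothetical example of a bot-blocking rule to look for --
# this returns 403 Forbidden to any request from Googlebot:
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule .* - [F,L]
```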
Here’s an example of a robots.txt file that restricts access to the wp-includes folder, where many of your CSS and JS files reside.
# Robots file, acceptable before July 28, 2015
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Here is an example of the same robots.txt file, with the restriction on the wp-includes folder removed.
# Robots file, acceptable after July 28, 2015
User-agent: *
Disallow: /wp-admin/
Yoast recommends not blocking wp-admin at all, as WordPress automatically handles this on a page level, using a robots meta tag which looks like this:

<meta name='robots' content='noindex,follow' />
The important thing to know is that, going forward, Google needs to be able to crawl your CSS and JS files.
To learn more about robots.txt and how to test it, see Google’s Search Console Help.
You can also test your robots.txt in your Google Search Console under Crawl > robots.txt Tester.
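If you'd like to check a robots.txt change before deploying it, Python's standard urllib.robotparser module can simulate how a crawler like Googlebot would interpret the rules. This is a minimal sketch; the rule sets mirror the before/after examples above, and the asset path is just an illustrative wp-includes file:

```python
from urllib import robotparser

# The "before" rules from the article, which block wp-includes
OLD_RULES = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

# The "after" rules, with the wp-includes restriction removed
NEW_RULES = """\
User-agent: *
Disallow: /wp-admin/
"""

def can_googlebot_fetch(rules, path):
    """Parse robots.txt text and report whether Googlebot may fetch path."""
    parser = robotparser.RobotFileParser()
    parser.parse(rules.splitlines())
    return parser.can_fetch("Googlebot", path)

# Illustrative CSS asset living under wp-includes
css_path = "/wp-includes/css/dashicons.min.css"

print(can_googlebot_fetch(OLD_RULES, css_path))  # False: CSS is blocked
print(can_googlebot_fetch(NEW_RULES, css_path))  # True: CSS is crawlable
```

This only checks the robots.txt logic itself; the official robots.txt Tester in Search Console remains the authoritative check, since it uses Google's own parser against your live file.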