Google Search Console for WordPress: 5 Tips to Improve SEO

Have you ever found yourself using a tool for a long time, yet feeling you're not tapping into much of its power? We've all felt like we were looking for a needle in a haystack, and there's nothing more satisfying than finally finding it and unlocking the tool's full potential. You've probably heard of Google's free SEO tool, Google Search Console. But are you sure you know everything you need to know about it?

WordPress-based websites are no different from any other website on the internet, but when it comes to optimization for search, there might be some insights and tools in GSC that you're unaware of.

What is Google Search Console?

Google Search Console is Google's free web service that allows webmasters to monitor and maintain their site's presence in Google Search results (the SERP). If you're looking to grow your organic traffic and improve over time, Google Search Console is a must-have. Besides in-depth insights about your website's organic performance, the platform provides plenty of information about off-page SEO and technical SEO, including how Googlebot crawls and indexes your site and which issues may be preventing pages or content from being indexed.

Like many other Google tools, Google Search Console has changed over time, with some (actually useful) features being deprecated and others being added. However, the process of adding a website property to the console has remained unchanged for a long time. If you haven't done it yet, follow this step-by-step guide.

In this guide you'll learn how to use the full potential of the platform to improve your SEO, and not only that: you'll get insights that are specific to WordPress websites. Let's dive in.

Tip 1: Track Keywords and Queries in GSC

When it comes to organic traffic performance, one of the most frequently visited dashboards is the Search results report in the Performance section. Here you can quickly analyze your impressions, clicks, click-through rate (CTR), and average position.

The 'New' button allows you to filter results based on what you are looking for. 


Depending on what you choose, results will appear below in the table view. 

If you wish to export results, you can do so in different formats, such as .csv, Google Sheets, or Excel. 

Use the Query report in GSC to see which keywords and questions people are using to find your site in Google. You can filter queries by date range, country, device type and more. This helps you identify relevant search queries to optimize for.

If you're new to this, watch "Intro to Google Search Console" where Daniel Waisberg covers the basics to help you succeed on Search. 


Use Regular Expressions and ChatGPT to Analyze Queries

Now, back to query analysis. There is a great way to analyze queries based on a desired pattern, and to do so, I use regular expressions. A regular expression, or regex, is a pattern that specifies a set of strings.

Using regex, you can filter different types of queries based on their nature, such as:

  • Questions
  • Long-tail keywords
  • Queries containing a specific word, and many other patterns.

This provides further insight into user search behavior. Writing regex was once reserved for people who were highly skilled at it, but today you can simply ask ChatGPT to generate one for you.

There is just one trick. If you go to ChatGPT and simply ask it to create a regex to filter data in your GSC property, it probably won't work. The reason is that Google Search Console only accepts RE2 regex syntax, so you need to specify this in your prompt.
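For example, a prompt along these lines usually does the trick (the wording is just an illustration):

"Write an RE2-compatible regular expression for the Google Search Console custom regex filter that matches queries starting with question words such as who, what, when, where, why, how, can, or does."

A typical result might look like the pattern below; treat it as a starting point and test it against your own data:

^(who|what|when|where|why|how|can|does)\b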

To filter data in your GSC property, click + New >> Query and switch the match type to Custom (regex).

Now, let's see how to create a ChatGPT prompt to get traffic insights for long-tail keywords. The idea is the same: ask for an RE2-compatible regex that matches longer queries and paste the result into the custom regex filter.
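As an illustration (not a definitive pattern), a regex like the one below matches queries of five or more words, which is one common working definition of long tail:

^(\S+\s+){4,}\S+$

Once applied, the report shows impressions, clicks, CTR, and average position for those longer queries only.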

Use the same approach to filter queries containing synonyms, your brand name, or whatever makes sense for your site. When you are not sure how to form the pattern, just describe it in a ChatGPT prompt. How cool is that?

Tip 2: Prevent Unnecessary URLs from Being Crawled

WordPress-based websites are very well suited to search optimization, in both technical and content terms. It just depends on how deep you need to go to make it work. Sometimes it's necessary to hire a developer to make customizations, especially for ecommerce sites, but it's all possible.

On the other hand, there are some settings that can be made by anyone, even people who are not familiar with coding, and I am going to show you exactly how to do it.

For instance, some URLs should not be crawled at all, and the way to keep crawlers away from them is a robots.txt file.

The robots.txt file is a standard used by websites to communicate with web crawlers and other web robots. The file specifies which parts of the website should not be processed or scanned by robots. By using the robots.txt file, site administrators can provide directives about which robots can or cannot access specific files or directories.

Some key points about the robots.txt file:

Location: The robots.txt file must be located at the root of the website host to which it applies. For example, if the website is https://example.com, the robots.txt file should be located at https://example.com/robots.txt. For WordPress websites, the robots.txt file also lives in the site's root directory (WordPress serves a virtual one by default until you create a physical file).

User-agent directive: This specifies which robot the following rule applies to. A * can be used as a wildcard to apply to all robots.

User-agent: *

Disallow directive: This specifies which URL path the robot should not access. For instance:

Disallow: /wp-admin/

This means that the robot should not access any page under the /wp-admin/ directory.

Allow directive: This is used to allow a robot to access a specific URL path, even if a broader rule disallows it. However, not all robots support this directive.

Crawl-delay directive: Some robots support a Crawl-delay directive, which indicates how many seconds a robot should wait between successive requests (note that Googlebot ignores this directive). For instance:

Crawl-delay: 10

Sitemap: You should specify the location of your website's sitemap in the robots.txt file to help search engines find it:

Sitemap: https://www.outreachmama.com/sitemap_index.xml

It's important to note a few things:

  • The robots.txt file is a request, not a command. Well-behaved robots will obey the directives, but there's no guarantee that all robots will do so.
  • The file does not prevent the listed URLs from being indexed by search engines; it only prevents crawling. If a URL is linked from other websites, it can still be indexed.
  • If you need to keep a page out of the index entirely, relying solely on robots.txt is insufficient; use a proper mechanism such as a noindex tag.
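Putting these directives together, a minimal robots.txt for a typical WordPress site might look like the sketch below. The Allow line keeps admin-ajax.php reachable, which WordPress itself includes by default; the sitemap URL is a placeholder you should replace with your own:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://example.com/sitemap_index.xml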

Submit your Robots.txt File and Request Indexing in GSC

You can create and edit your WordPress robots.txt file using SEO plugins. One that does a great job is RankMath. All you need to do is go to RankMath SEO → General Settings:

If you change the file and want Google to pick up the update faster than it would on its own, you can submit your robots.txt URL to Google for recrawling.

Disallow /feed/ From Crawling

This is something you will first need to investigate in the Indexing >> Pages section. Once you understand what you are looking for and why, you will be ready to use the power of the robots.txt file.

In WordPress, a "feed" is a type of specialized content delivery mechanism that provides frequently updated content in a structured format, usually XML. Feeds are primarily used by feed readers, news aggregators, and other tools to fetch and display your site's content in real-time. The most common type of feed format used in WordPress is the Really Simple Syndication (RSS) format.

The "feed URL" is the web address where this feed can be accessed.

URL: https://example.com/feed/ - This provides the RSS feed for your site's main content. 
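WordPress also generates feeds for other parts of the site. With standard pretty permalinks these look like the following (the slugs are placeholders):

  • https://example.com/comments/feed/ - the site-wide comments feed
  • https://example.com/category/news/feed/ - the feed for a single category
  • https://example.com/sample-post/feed/ - the comments feed for an individual post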

Feed URLs are created for RSS feed crawlers and readers, not for humans. They are only bare-bones, code-level versions of your actual content pages, so you do not want them to be indexed by Google bots. Google doesn't value them and would most likely not show them to users.

To analyze this and identify if your /feed/ URLs are accessible for Google, navigate to Indexing >> Pages section. Such URLs are usually in the group “Crawled - currently not indexed” or “Discovered - currently not indexed”.

If there is a need for it, you can disallow such URLs from being crawled by adding a directive to your robots.txt file:

Disallow: /feed/
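If you also want to cover nested feeds such as category or per-post comment feeds, a slightly broader sketch might look like this (Googlebot supports the * wildcard in robots.txt, but test any change before relying on it):

User-agent: *
Disallow: /feed/
Disallow: /*/feed/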

Tip 3: Identify Crawling and Indexing Issues in the Crawl Stats Report

Another invaluable dashboard worth understanding is the Crawl Stats report. It relates to the technical performance of your website, so don't worry if you haven't come across it before.

The Crawl Stats report provides all the information you need about crawl requests and the potential issues Google bots might have while accessing your pages. You can find it under Settings >> Crawl stats.

In this report, you can learn how your server handles crawl requests and whether there is anything to worry about.

The fail rate graph (under Host status) shows whether your server connectivity had problems in the past.

The example above shows server-related issues that could negatively affect the crawl rate of your website. If Google bots identify server overload issues, they will decrease the crawl rate, which means fewer visits to your pages. You don't want this to happen, so make sure to check these reports often; if you identify an issue, report the problem to your hosting provider and, if necessary, consider changing or upgrading your server.

Crawl Request Breakdown

Regularly monitoring the "Crawl Request Breakdown" section in GSC can help you identify potential issues with your site, understand how Google perceives your content, and make informed decisions to optimize your site for search. Let's see what each of these sections means:

Crawl Request Breakdown helps you understand how Google perceives your website structure and content types.

Total Crawl Requests:

  • This section provides an overview of the total number of requests made by Googlebot over a specified period, and it’s a good indicator of how actively Google is trying to understand and index your content.

By Response:

  • This section categorizes crawl requests based on the HTTP response code received by Googlebot.
  • Common categories include "200 OK" (successful requests), "404 Not Found" (broken links or removed pages), "5xx" (server errors), and so on.
  • Monitoring this section helps you identify potential issues with your site, like broken pages or server problems; a quick way to double-check a few URLs yourself is sketched below.
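If the breakdown shows unexpected 404 or 5xx responses, it can help to verify a handful of URLs yourself before contacting your host or developer. Here is a minimal Python sketch for that kind of spot-check, assuming the requests package is installed; the URLs are placeholders for your own pages:

import requests

# Replace these placeholders with the URLs flagged in the report.
urls = [
    "https://example.com/",
    "https://example.com/old-post/",
]

for url in urls:
    try:
        # HEAD keeps the check lightweight; switch to requests.get() if a server rejects HEAD.
        response = requests.head(url, allow_redirects=True, timeout=10)
        print(response.status_code, url)
    except requests.RequestException as error:
        print("ERROR", url, error)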

By Purpose:

This categorizes crawl requests based on why Googlebot is making the request. Common purposes include:

  • Refresh: Googlebot trying to update its index with the latest version of your content.
  • Discovery: Googlebot exploring new pages or URLs it hasn't seen before.

This section helps you understand the intent behind Google's crawl activity on your site.

By Googlebot Type:

This breaks down crawl requests by the type of Googlebot making the request. Different Googlebots crawl for different reasons, such as:

  • Desktop: Googlebot for traditional desktop-based search.
  • Mobile: Googlebot for mobile search.
  • Image: Googlebot specifically for crawling images.
  • Video: Googlebot for crawling video content.

Monitor this section to understand how your content is being accessed; it can inform decisions around mobile optimization, multimedia content, and more.

By File Type:

  • This section categorizes crawl requests based on the type of file being requested, such as HTML, JavaScript, CSS, images, etc. It's essential to ensure that Googlebot can access and understand crucial resources, especially if your site relies heavily on JavaScript for rendering.

For each of the sections mentioned above, there's usually a "Details" link or button that provides a more granular view of the data, often showing specific URLs, response codes, or other detailed information. This detailed view can be invaluable for troubleshooting specific issues or understanding specific crawling behaviors.

Tip 4: Track Top Referring Sites and Linking Domains

SEO strategy requires careful navigation across multiple dimensions. While investing time and resources in content creation and technical optimization is pivotal, ensuring the quality and relevance of your backlinks is equally critical. Neglecting this aspect can expose your website to the risks of negative SEO, where malicious backlinks or irrelevant backlink profiles can harm your site's reputation and ranking.

Google Search Console keeps a record of your backlinks and anchor text, giving you the data you need to assess their quality and relevance. To explore your top backlinks, go to Links >> Top linking sites. For insights into the most commonly used anchor text, navigate to Links >> Top linking text.

Monitor changes over time to identify new link building opportunities or issues with existing links. 

These metrics are essential for off-page optimization. A clean, natural, and relevant backlink profile not only safeguards against negative SEO but also strengthens your domain authority, paving the way for increased organic traffic.

While building backlinks, it's not just about quantity; it's the quality and relevance that truly matter. In the world of SEO, sometimes, less can be more, especially if those fewer links are high-quality and pertinent to your niche.

Tip 5: Remove Unwanted URLs with the URL Removal Tool

Speaking of negative SEO, there are a couple of tools you might find extremely helpful. Negative SEO is an intentional attack on your website, where someone (typically a competitor) points a load of toxic and irrelevant backlinks at it, usually combined with manipulative anchor text such as exact-match keywords that Google may find unnatural. In such situations, you need to clean those links out of your backlink profile using Google's disavow tool. Separately, GSC's URL removal tool (Indexing >> Removals) lets you quickly hide low-quality, irrelevant, or outdated pages of your own from search results. Keep in mind that the removal is temporary (roughly six months), so pair it with a noindex tag or delete the page if you want it gone for good. Both steps help refine your indexation and protect your SEO.

Be cautious with both tools: only request removal for URLs you genuinely want out of search results, and only include genuinely toxic backlinks in the disavow file.
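For reference, the disavow file is a plain .txt file you upload through Google's disavow links tool, with one URL or domain per line; lines starting with # are treated as comments. The domains below are placeholders, not real examples:

# Entire domains pointing spammy links at the site
domain:spam-example-one.com
domain:spam-example-two.net

# A single toxic page
https://link-farm-example.org/bad-page.html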

Conclusion

As the online world keeps changing, it's good to stay updated. Now that you know these tips and tricks, I hope they help you find what needs to get better and keep your site growing organically. With GSC, you can keep an eye on your keywords, check your links, and fix website errors. Use it to keep your website on track and stay ahead online.

Author: Nevena Korac, SEO Specialist at OutreachMama