The following tips are focused on the technical side of SEO. Unlike the previous chapters, this one deals with the general technical side of SEO and is not necessarily focused on performance issues.
Switch to HTTPS
As we have heard multiple times by now, search engines want to make sure their users have the best possible experience during their search. Hence, user experience and security are two of the main drivers when it comes to ranking high.
You might remember the major buzz in 2018, when Chrome started flagging non-HTTPS websites and marking them as “Not secure”. So in case your site doesn’t run on HTTPS, your users receive a notification that it might not be secure. A nightmare for the site owner!
Basically, the algorithm works as follows: if two websites are (close to) identical in terms of all other parameters, Google will rank the one running on HTTPS higher.
But be careful: a common mistake is to migrate the site to HTTPS but forget to set up a 301 redirect from the HTTP to the HTTPS version. This means you basically end up with two websites: one running on HTTP and one running on HTTPS. This is a serious problem, as Google now sees two separate websites instead of one!
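If your site runs on Apache, the redirect can be handled in the .htaccess file. Here is a minimal sketch, assuming Apache with mod_rewrite enabled (adjust it to your own setup):

    # Force HTTPS: redirect every plain-HTTP request to its HTTPS equivalent
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]

On nginx or with a managed host, the equivalent setting usually lives in the server configuration or the hosting dashboard instead.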
Check for one single WWW version
Similar to the previous paragraph, always make it very clear which URL version your website runs on! There are two versions: one including the “www” and one skipping it.
https://domain.com – without “www” (recommended)
https://www.domain.com – with “www” (also works)
Always make sure to be consistent with it when it comes to link building and social networks. If the “www” is not needed, don’t mention it. For example, if your website loads without the “www”, then it would be weird and unnecessary to use the following format: https://www.domain.com across your social networks.
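To enforce one version technically, you can 301-redirect the unwanted variant to the preferred one. A hedged sketch for Apache with mod_rewrite, assuming the non-“www” version is preferred and example.com is a placeholder for your own domain:

    # Redirect www to the bare domain so only one version exists
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
    RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]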
Avoid mixed content
What is mixed content?
Mixed content refers to a mix of secure and non-secure resources on a webpage. This happens whenever a secure webpage attempts to load resources, such as images or CSS, over a non-secure connection. Sounds abstract – why is this a problem?
Well, Google announced that Chrome will block mixed content by default.
This in turn means that Chrome will simply not load these parts of your website, potentially breaking the layout or functionality of your page.
An example: let’s say your site runs on HTTPS, but for some reason loads a CSS file over HTTP (seen as non-secure). That’s when you have created a mixed content error.
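In markup, such an error could look like this (a hypothetical snippet; the domain and file path are placeholders):

    <!-- On an HTTPS page, loading a stylesheet over plain HTTP triggers a mixed content warning -->
    <link rel="stylesheet" href="http://example.com/assets/style.css">
    <!-- Fix: load the same file over HTTPS (or use a relative path) -->
    <link rel="stylesheet" href="https://example.com/assets/style.css">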
This issue commonly happens when you use an old theme or outdated plugins. Therefore, always make sure that your themes and plugins are up-to-date and come from trusted sources only.
Create an XML Sitemap Manually & Add it to Google Search Console (GSC)
According to Google, “a sitemap is a file where you provide information about the pages, videos, and other files on your site, and the relationships between them. Search engines like Google basically read this file to more intelligently crawl your site.”
A sitemap essentially gives the search engine a more in-depth idea of the structure of your site. This, in turn, helps the search engine understand which pages it should crawl, and crawl them faster and better.
In case you’re using an SEO plugin like Yoast or Rank Math, your sitemap is generated automatically.
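If you do need to create one manually, a sitemap is just an XML file following the sitemaps.org protocol. A minimal sketch (the domain and dates are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/</loc>
        <lastmod>2021-01-15</lastmod>
      </url>
      <url>
        <loc>https://example.com/the-seo-checklist/</loc>
      </url>
    </urlset>

Upload the file to the root of your site (e.g. https://example.com/sitemap.xml) and submit that URL in Google Search Console under Sitemaps.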
Create a robots.txt File
Robots.txt, also known as the “robots exclusion protocol”, is a file that contains instructions for crawlers (spiders). It essentially tells the search engine which pages to crawl and which pages NOT to crawl.
Googlebot has a limited “crawling budget” for each site (meaning how many URLs it will crawl). Hence, it’s crucial to guide the spider to make sure it crawls the most important pages and neglects the ones that are not as important anyway (attachment pages, sometimes tags, etc.) – and not vice versa!
Now, let’s do a little experiment:
Try to open your own robots.txt file by accessing https://yourwebsite.com/robots.txt
Now, does this open the file? Yes? Fantastic, this means you have a robots.txt in place for your website. If not, simply create one and drop it into the root of your website, via FTP.
Here’s what you can do with the robots.txt file:
- Sculpt the crawling budget by eliminating unnecessary pages from the crawl
- Prevent spiders from accessing “private” pages (note that robots.txt only controls crawling; to reliably keep a page out of the search results, use a noindex tag as well).
This refers to “membership” or “thank you pages” that are of secondary importance
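Putting this together, a minimal robots.txt could look like the following sketch (the paths and domain are placeholders; adjust them to your own site structure):

    # Applies to all crawlers
    User-agent: *
    # Keep crawlers out of the WordPress admin area
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php
    # Example of a low-value "thank you" page excluded from crawling
    Disallow: /thank-you/
    # Point crawlers to the sitemap
    Sitemap: https://example.com/sitemap.xml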
Consolidate duplicate URLs with Canonical tag
Maybe you have noticed it before when browsing the web: the same page is available under two URLs, one with and one without the trailing slash.
Here’s an example:
- https://site.com/the-seo-checklist/
- https://site.com/the-seo-checklist – without the trailing slash
As mentioned before, this basically means that Google sees two different pages with the same content. When it comes to ranking your page, you will want to avoid this at all costs.
How to fix it?
First, check that the canonical URL is set to the trailing-slash version (“/”) – if the canonical is already set correctly, you’re fine.
To fix this issue once and for all, simply do a 301 redirect from the URL without a slash to the one with the slash.
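For illustration, the canonical tag is a single line in the page’s head, and the redirect can be handled server-side. A hedged sketch, assuming Apache with mod_rewrite and using the example URL from above:

    <!-- In the <head> of the page: declare the trailing-slash URL as the canonical version -->
    <link rel="canonical" href="https://site.com/the-seo-checklist/">

    # In .htaccess: append a trailing slash to URLs that are not files, via a 301 redirect
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*[^/])$ /$1/ [L,R=301]

Most SEO plugins set the canonical tag for you, so check what is already in place before adding it by hand.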
Check and Fix Broken Links
Especially after blogging or running a website for years, it is quite likely that you have a couple of broken links flying around. A broken link is a reference to another page that no longer exists. If you or one of your users try to access such a page, it will return a 404 error.
Depending on how many links you have, you might now ask: “How do I quickly identify them without clicking on every single link on my website?” No worries – there are crawling tools to help you out with this. Two recommended tools are Sitebulb and Screaming Frog SEO Spider (free up to 500 URLs). Go check them out!
While there is – as always – a WordPress plugin for WordPress users, it is not recommended, as it can create performance issues due to its search method. It requires a lot of resources to run the search, which in turn might cause the server to go down, especially if you don’t have a high-performance web host.
For more details, see Kinsta’s guide showcasing 5 ways to best check for broken links.
Check and Fix Redirect Chains
Whenever two or more redirects happen in a row – one redirect pointing to another – it is called a redirect chain. Here is an example:
Page A -> Page B -> Page C -> Page D.
Why should we try to break redirect chains?
- it increases crawling efficiency
- it reduces page loading times
- it improves link equity, which means that Page D gets more “link power”, eventually resulting in a higher ranking
The easiest way to fix this and break the chain is to redirect Page A (as well as B and C) straight to Page D via a 301 redirect.
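A hedged .htaccess sketch of what breaking the chain could look like; the paths and domain are placeholders:

    # Instead of chaining A -> B -> C -> D, point every old URL straight at the final destination
    Redirect 301 /page-a https://example.com/page-d
    Redirect 301 /page-b https://example.com/page-d
    Redirect 301 /page-c https://example.com/page-d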
As mentioned in the previous paragraph, it is recommended to use a crawling tool – again we suggest Sitebulb or Screaming Frog SEO Spider.
Fix Missing and Duplicate Meta Titles and Descriptions
As mentioned before, the meta title and description are the information shown in search results. For many users, this little piece of text will be the first touchpoint with your website – ultimately deciding whether someone clicks on your page or not.
Sometimes it happens – either by accident or because of the CMS your website is built on – that this metadata goes missing or you end up with duplicates. Of course, this is not ideal and hence should be regularly checked.
Similar to the two previous paragraphs, you can use a crawling tool such as Sitebulb or Screaming Frog to track these issues down. In addition, here’s a good tutorial on how to avoid them in the first place.
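For reference, both live in the page’s head; a hypothetical example (the titles and text are placeholders, and each page should get its own unique pair):

    <head>
      <!-- The meta title shown as the clickable headline in search results -->
      <title>The SEO Checklist – Technical SEO Tips</title>
      <!-- The meta description shown as the snippet below the title -->
      <meta name="description" content="A practical checklist covering HTTPS, sitemaps, robots.txt, redirects and more.">
    </head>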
Use Redirects Properly
There are multiple different redirect types that can be used in various scenarios. The most important ones related to SEO are…
- …the 301 redirect – a permanent redirect
- …the 302 redirect – a temporary redirect
- …the 410 status code – tells search engines the page is permanently gone (strictly speaking a status code rather than a redirect)
To give you a better idea about when to use which redirect type, you will find some use cases below, together with the matching redirect type:
Scenario 1: You want to consolidate page A with the content of page B -> Use a 301 redirect from B to A
Scenario 2: You want to create a new page A with updated information, but you already have a page B treating the same subject -> Use a 301 redirect from B to A
Scenario 3: You want to transfer the link power from page A to B -> Use a 301 redirect from A to B
Scenario 4: You want to delete a page and mark it as “gone” from the internet -> Return a 410 status
Now we have talked about the different redirect types and use cases, but how do we actually create these redirects? You can set them up directly on the server, for example in Apache’s .htaccess file. Or, when you work with WordPress, simply use a redirect plugin.
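For those managing redirects on the server, here’s a hedged .htaccess sketch covering the scenarios above (all paths and domains are placeholders):

    # Permanent move (Scenarios 1-3): pass link power from the old URL to the new one
    Redirect 301 /old-page https://example.com/new-page
    # Temporary redirect (e.g. a short-lived campaign page)
    Redirect 302 /summer-sale https://example.com/coming-soon
    # Deleted page (Scenario 4): tell crawlers it is permanently gone (HTTP 410)
    Redirect gone /retired-page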
Check for Index Coverage Issues
To check your website for index coverage issues, simply go to your Google Search Console property and analyze the Coverage tab under the Index section.
Now, what should you look for?
The tool reports issues such as:
- a drop in total indexed pages without corresponding errors (this could mean you are blocking access to some pages, e.g. via robots.txt or by noindex-ing them)
- error spikes
- missing pages
- server errors
- 404 errors
All this information helps you to get a good overview and make educated SEO decisions.
Check and avoid Manual Penalties
Have you ever heard of getting a “manual penalty” from Google? It means that an actual human being reviews your website after an automated process has flagged it as suspicious. And at the end of the day, if that Google reviewer concludes that your site is indeed violating the guidelines, your site (or parts of it) is removed from the search results and no longer shows up. This is probably the worst possible scenario for your SEO efforts.
Here’s what the cause-and-effect chain looks like:
- You (maybe even unintentionally) do some shady SEO activities for your website
- Google’s algorithms catch you as your website triggers a certain number of red flags
- A real person manually checks the site against those red flags, and if the reviewer confirms the violation, you get the manual penalty.
The consequence is that your site is removed from the search results. The good news is that the chances of being blocked from Google are relatively low. But what can you do if it happens? A major indicator is that your organic search traffic suddenly drops out of nowhere. In this case, check the Google Search Console for more information, fix the issues, and submit a reconsideration request.
Keep Pages at a Maximum of 3 Clicks Away
In general, Google assumes that content that is buried deep in your site is less important than pages that are easily accessible. Therefore, keep your relevant content within reach – ideally not more than 3 clicks away!
This will help Google’s bots to crawl your site more efficiently and increases the chances of having all your important pages properly indexed – a primary condition for all SEO efforts.
Research shows that the optimal crawl depth for a website is three levels or less.
Here are a couple of tricks to increase the accessibility of your pages:
- optimize internal linking
- place the most important URLs in the top levels of a site’s architecture
- use breadcrumbs to allow users and search engines to easily navigate through your pages (a markup sketch follows below).
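Breadcrumbs can additionally be marked up with schema.org structured data so search engines understand the hierarchy. A minimal JSON-LD sketch (the names and URLs are placeholders):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
        { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://example.com/blog/" },
        { "@type": "ListItem", "position": 3, "name": "The SEO Checklist", "item": "https://example.com/blog/the-seo-checklist/" }
      ]
    }
    </script>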
Use “noindex” and “nofollow” Tags Properly
Google supports different tags that let us categorize and describe our pages. They tell the search engine which pages we want indexed, which links followed, and which pages should not be accessed at all.
There are three important tags out there that everyone should know about:
- Noindex: tells the search engine to exclude your page from the search results
- Disallow: tells it not to crawl your pages at all (strictly speaking a robots.txt directive rather than a tag)
- Nofollow: tells it not to follow the links on your page
So the big question is, why and when should we use each tag?
Well, the number one goal should be to optimize your crawling budget. Just imagine you have a complex website with over 1,000 individual pages, and 800 of these pages are generated by tags. In this case, you mark these pages with a noindex tag, which results in your site being crawled more efficiently. WordPress, pretty much any decent CMS, and SEO solutions such as Yoast or Rank Math give you options for such a task.
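On the page level, these directives are set via the robots meta tag in the page’s head; a hypothetical example:

    <!-- Keep the page out of search results, but still let crawlers follow its links -->
    <meta name="robots" content="noindex, follow">
    <!-- Keep the page out of search results and don't follow its links -->
    <meta name="robots" content="noindex, nofollow">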
Disable Irrelevant Pages from Being Indexed (noindex)
Let’s think a step further: Google sets a limited crawling budget for each website and probably won’t crawl all your 1,000 pages in detail. Your goal is to have the most important pages crawled, while the not-so-important ones are of lower priority. What you want to avoid is Google crawling the wrong, irrelevant pages while leaving the important ones out!
“The ranking strength of a website is the average ranking strength of its pages.”
Keep that in mind and go through every page one by one, asking yourself: “Does this page have a good chance to rank for the keyword it’s targeting?” Every CMS automatically generates a number of pages which – from an SEO point of view – shouldn’t be indexed. With WordPress, these usually are:
- tags
- attachments
- custom post types pages
- custom post types categories
- custom post types tags
- author pages
The simplest way of disabling these in WordPress is by using Yoast SEO or Rank Math.