Google announced in 2019 that it had stopped supporting rel="next" and rel="prev" as an indexing signal (and hadn't actually been using them for a while).
I ran a test in April 2019 to see whether rel="next" and rel="prev" tags were used by Googlebot as a mechanism to discover new content. I suspected they were never really used as a discovery mechanism for crawling, and ran a test to see if the data supported this theory.
In this experiment I learned:
- Googlebot doesn't use rel="next"/"prev" for discovery crawling – the experiment showed that Googlebot does not use rel="next" and rel="prev" tags as a discovery mechanism to find new pages when crawling a website.
- Crawlable links still matter – a crawlable link is a far superior method of getting Googlebot to discover a new paginated page.
I did create a hypothesis for both tests before running them, but I would not claim this theory is proven using "scientific methods".
Each paginated page had unique content (to give each page the best chance of being seen as unique and valuable by Google's systems), and none were added to the website's XML Sitemap (so discovery could only happen through the tags and links under test).
Test 1 – rel="next" and "prev" discovery test
The first test was quite simple: can Googlebot crawl links ONLY found in rel="next" and "prev" tags?
For the first half of the test, I created a paginated series (page-1, page-2, page-3) and added rel="next" and rel="prev" tags to each paginated page in the sequence.
I linked to the first page of the series from the homepage so Googlebot could discover it.
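In practice, the markup in the `<head>` of each paginated page looked something like this (a sketch; the domain and URL paths are placeholders, not the actual test site's):

```html
<!-- Hypothetical <head> markup for page-2; page-1 would omit rel="prev"
     and page-3 would omit rel="next" -->
<link rel="prev" href="https://example.com/page-1/">
<link rel="next" href="https://example.com/page-3/">
```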
Test 1 – Results
Ten days after adding the paginated series to the website, the log files showed that Googlebot had not discovered the paginated pages (page-2 or page-3).
Although this was a very short period of time after starting the test, I had another idea and theory I wanted to test. So, I ran another quick test alongside this one to see if a crawlable link would be picked up more quickly by the web crawler.
Test 2 – Crawlable link to paginated page test
The second idea was again quite simple: would Googlebot discover a new paginated page much more quickly when it was linked to with a crawlable link?
For the second half of the test, I created a new paginated page (page-4) and added a crawlable link to the new page from the homepage.
I also updated the rel="next" and rel="prev" tags to include the new page in the paginated series.
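The change can be sketched as follows (hypothetical markup; the domain, paths, and anchor text are placeholders):

```html
<!-- Hypothetical homepage markup: a standard, crawlable anchor to the new page -->
<a href="https://example.com/page-4/">Page 4</a>

<!-- And on page-3, the rel tags updated so page-4 joins the series -->
<link rel="prev" href="https://example.com/page-2/">
<link rel="next" href="https://example.com/page-4/">
```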
Once the test was set up, I submitted the homepage for re-crawling using the URL Inspection Tool in Google Search Console.
Test 2 – Result
What was quite surprising was that Googlebot discovered and crawled the new paginated URL, linked with a plain anchor element, seconds after it fetched the homepage.
Seventeen days into the test, Googlebot had still only crawled the paginated page with the crawlable link from the homepage.
After 30 days I stopped the test. None of the paginated pages in the series (apart from page-4) had been discovered and crawled by Googlebot.
Test 1 and Test 2 Conclusion
The hypothesis for test 1 – that Googlebot can crawl links found only in rel="next" and "prev" tags – was disproved by the log file data.
The conclusion from the test was that Googlebot does not use rel="next" and rel="prev" tags as a discovery mechanism to find new pages when crawling a website.
The hypothesis for test 2 was confirmed (I was quite shocked at how fast) when Google found the new paginated page seconds after the homepage was submitted to the URL Inspection Tool in Google Search Console.
This example also showed that a crawlable link is a far superior method of getting Googlebot to discover a new paginated page.
I'd strongly recommend linking to paginated pages with crawlable links so Googlebot can discover these page types.
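Google's own documentation states that Googlebot only reliably follows `<a>` elements with an `href` attribute, so "crawlable" here means something like the first snippet below rather than the second (both are illustrative):

```html
<!-- Crawlable: a standard anchor with an href Googlebot can follow -->
<a href="/page-4/">Page 4</a>

<!-- Not reliably crawlable: no href, navigation handled only in JavaScript -->
<span onclick="window.location='/page-4/'">Page 4</span>
```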
To learn more about SEO-friendly pagination best practices, you can read my Ultimate Pagination SEO Guide or my study State of the Web: Search Friendly Pagination and Infinite Scroll.