Both Thunderbird and Firefox ESR were updated to version 68 on my Windows computers a long time ago, but I still have version 60 on Ubuntu. The PPA is https://launchpad.net/~mozillateam/+archive/ubuntu/ppa
I have a website where I publish articles, which I started just two weeks ago. I'd like to keep the pages as clean as possible and load additional content (links to other articles) via AJAX requests triggered by user actions (clicks, for now). I have read up on this a bit, but most of the articles and blog posts on the topic are outdated. I understand that Google used to support crawling AJAX content through its AJAX crawling scheme, but no longer does. Some posts also recommend serving the same content through regular pagination instead. I also read about sitemaps, and I know they give search engine crawlers an indication of which pages to crawl.
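For reference, my understanding is that a minimal sitemap listing the article URLs would look something like this (the example.com URLs are placeholders, not my real site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Placeholder article URLs; each <url> entry points crawlers at one page -->
  <url>
    <loc>https://example.com/articles/first-post</loc>
    <lastmod>2019-11-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/articles/second-post</loc>
  </url>
</urlset>
```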
However, will crawlers miss these links entirely, since they are unreachable except by clicking the Load More button? And does listing a URL in a sitemap ensure that crawlers will actually visit it?