Web crawlers – Are AWS signed URLs crawled by Google?

I have used an Amazon S3 pre-signed URL to share some content.

https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html

Can Google crawl this URL? I share it with just one customer. What about other services? There are also approaches that share content simply by generating a seemingly random URL (or even a hash-based one), for example: www.somedomain.com/something/15b8b348ea1d895d753d1acb57683bd9
Will this URL be crawled by Google or other search engines?
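
For reference, this is roughly how I generate the link (a minimal sketch based on the boto3 guide linked above; the bucket and key names are just placeholders):

    import boto3

    # Minimal sketch: bucket and key are placeholders for the real object I share.
    s3 = boto3.client("s3")

    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "my-example-bucket", "Key": "reports/customer-123.pdf"},
        ExpiresIn=3600,  # the link stops working after one hour
    )

    print(url)  # sent privately to one customer, never published on any page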

Thank you very much

googlebot – URL crawlable in the old Google Search Console but blocked in the new one

Google stopped crawling images for our site when we moved images to the cdn subdomain.

For the same image URL, the old Search Console reports that it can be crawled, while the new one says it is blocked by robots.txt.

Our robots.txt only has the following:

User-agent: *

Does anyone know why that is?

(Screenshots of the old and new Search Console results attached.)
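
In case it helps, I also tried to reproduce the check locally (a sketch; cdn.example.com stands in for our actual CDN subdomain, which serves its own robots.txt file):

    from urllib import robotparser

    # Each host serves its own robots.txt, so the file on the main domain
    # does not apply to the CDN subdomain. cdn.example.com is a placeholder.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://cdn.example.com/robots.txt")
    rp.read()

    image_url = "https://cdn.example.com/images/product-photo.jpg"
    print(rp.can_fetch("Googlebot-Image", image_url))
    print(rp.can_fetch("Googlebot", image_url))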

SEO – Will a sitemap ensure that pages served via AJAX requests are crawled?

I have a website, started just 2 weeks ago, where I publish articles. I'd like to keep the pages as clean as possible and load additional content (links to other articles) via AJAX requests triggered by user action (for now, clicks). I have read up on this, but most of the articles and blog posts on the topic are outdated. I understand that Google used to have a special scheme for crawling AJAX content but no longer does. Some posts also recommend providing the same content through pagination instead. I have also read about sitemaps and know that they give search engine crawlers an indication of which pages to crawl.

However, will crawlers have a problem because these links are not present in the initial HTML and can only be reached by clicking the Load More button? Does a sitemap make sure the crawlers visit those URLs?
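
For context, the plan would be to list every article URL in the sitemap even though the links only appear on the page after clicking Load More (a rough sketch; example.com and get_article_urls are placeholders for my site and data source):

    import xml.etree.ElementTree as ET

    def get_article_urls():
        # Placeholder: on the real site these would come from the article database,
        # the same source the AJAX "Load More" endpoint reads from.
        return [
            "https://www.example.com/articles/first-post",
            "https://www.example.com/articles/second-post",
        ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for article_url in get_article_urls():
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = article_url

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)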

SEO – Crawled, currently not indexed: indexed pages are steadily decreasing! What could be causing this?

These pages are slowly being deindexed; I lose a few hundred pages a week. They end up under "Submitted and currently not indexed".

Checking "Crawled, not currently indexed" displays everything in green, with a referring page and the message "Indexable". "The URL will only be indexed if certain conditions are met". There is nothing wrong with the pages. I am not sure what to do to stop the deindication. The site is ncservo.com if it helps.

SEO – URL parameters in GSC show strange values for recently crawled URLs

I have been fighting for some time with what looks like a bug in Google's canonical page selection. One piece of advice I got was to set the URL parameter handling in GSC for our "page" parameter to "Every URL" instead of "Let Googlebot decide", since we use a page-generation script driven by that parameter. When I do this and click "Show sample URLs", GSC displays the following recently crawled URLs:

index.pl?page=nhcuofak
index.pl?page=mgiwznbsiwhmbh
index.pl?page=cbmtogqjbgakj
index.pl?page=kzktuwhan
index.pl?page=uxuatqqr
:
:

I also attached a screenshot. Certainly none of these pages exist on our web server. As far as I can tell, our GSC account has not been hacked; at least I see no evidence that anyone other than me has made indexing requests. Requesting any of these parameter values makes our site return a hard 404. Why would Google crawl with random page parameter values? And a second question: could this affect Google's canonical page selection?
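
One thing I tried, to figure out where Googlebot picks these values up, was to scan our access log for the random page values and their referrers (a sketch; the log path and the combined log format are assumptions about our server setup):

    import re

    # Assumptions: Apache-style combined log format and this log path.
    LOG_PATH = "/var/log/apache2/access.log"
    LINE_RE = re.compile(
        r'"GET /index\.pl\?page=(?P<page>\w+) HTTP[^"]*" \d+ \S+ '
        r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    with open(LOG_PATH) as log:
        for line in log:
            match = LINE_RE.search(line)
            if match and "Googlebot" in match.group("agent"):
                # Which page value was requested, and where did Googlebot find it?
                print(match.group("page"), "referer:", match.group("referer") or "-")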

SEO – Google Search Console has crawled my Shopify website, but it does not appear in Google Search with the site: prefix

About a month ago, I submitted my Shopify site to Google Search Console and set the sitemap.xml path, which points to products, blogs, and collections. I searched Google with the site: prefix, i.e. site:nodosperu.com, but I do not see any of my product pages there. So I checked the configuration again and for some reason the products were not listed, which is why I manually added the product sitemap alongside the site's sitemap, since it was not being picked up.

(Screenshot attached.)

Unfortunately, that did not solve the problem. I checked the robots.txt file and it does not exclude any of these pages.
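
To double-check, I also fetched the submitted sitemap myself and spot-checked the listed URLs (a sketch with the requests library; the sitemap path is the one I submitted in GSC):

    import requests
    import xml.etree.ElementTree as ET

    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    sitemap_url = "https://nodosperu.com/sitemap.xml"  # the path submitted in GSC

    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
    # If this is a sitemap index, the <loc> entries are the child sitemaps
    # (products, blogs, collections) rather than individual pages.
    locs = [loc.text for loc in root.findall(".//sm:loc", NS)]
    print(len(locs), "entries listed")

    for loc in locs[:5]:  # spot-check that the first few respond with 200
        print(requests.get(loc, timeout=10).status_code, loc)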

Is there any other setting I need to change? Currently I can only see 4 URLs in Google Search.

(Screenshot attached.)

Any help would be appreciated, thanks.

Search Results – Are all list attachments crawled in SharePoint Online?

I'm having trouble getting search results from my SharePoint list.
I have attached some JPG, MS Word/Excel, and PDF files, and I only sometimes get results for them.
Does anyone have an idea what is actually crawled in the content sources for SharePoint Online?
How do I know when a resource was last crawled?
If someone from the MS team could give a hint here, thank you in advance.
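
For context, this is roughly how I query the search index to see whether the attachments appear at all (a sketch against the SharePoint Search REST API; the site URL, list name, and bearer token are placeholders, and I'm assuming token-based authentication is already set up):

    import requests

    site = "https://contoso.sharepoint.com/sites/mysite"  # placeholder site URL
    token = "<access token>"                              # placeholder; auth setup not shown

    params = {
        # Restrict to documents under the list, i.e. the attachments ("MyList" is a placeholder).
        "querytext": "'Path:{0}/Lists/MyList* IsDocument:1'".format(site),
        "selectproperties": "'Title,Path,LastModifiedTime'",
    }
    resp = requests.get(
        site + "/_api/search/query",
        params=params,
        headers={
            "Accept": "application/json;odata=nometadata",
            "Authorization": "Bearer " + token,
        },
        timeout=30,
    )
    resp.raise_for_status()
    rows = (resp.json()
            .get("PrimaryQueryResult", {})
            .get("RelevantResults", {})
            .get("Table", {})
            .get("Rows", []))
    print(len(rows), "attachment(s) returned from the search index")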

SEO – Google crawls my URL, but the crawled page is the old version

Google crawls my website constantly, as it is constantly updated. However, the title tag and meta description are not updated in the SERP results.
So I checked the crawled page, and it shows the old content rather than the updated content.
I've updated the content and forced a re-crawl, but where is Google getting the old content from?
It's been over 2 weeks.

Screenshot:
Crawled Page – http://prntscr.com/nry9s6, but the updated version is 6.12
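
One check I did in the meantime (a sketch; example.com stands in for my real URL) was to fetch the page both as a normal browser and with a Googlebot user agent, to rule out a cache or CDN layer serving crawlers the old HTML:

    import requests

    URL = "https://www.example.com/"  # placeholder for the page in question

    AGENTS = {
        "browser": "Mozilla/5.0",
        "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    }

    for name, agent in AGENTS.items():
        resp = requests.get(URL, headers={"User-Agent": agent}, timeout=10)
        start = resp.text.find("<title>")
        end = resp.text.find("</title>")
        title = resp.text[start + 7:end] if start != -1 and end != -1 else "(no <title> found)"
        # Compare status, caching headers, and the title each client actually receives.
        print(name, resp.status_code,
              "Cache-Control:", resp.headers.get("Cache-Control", "-"),
              "Age:", resp.headers.get("Age", "-"),
              "Title:", title)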