Removing a property from Google Search Console only removes the site from Search Console; it does not remove the site from Google's index.
I'm not sure what your goal is. However, you can use robots.txt to remove your site from Google…
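For Google alone, something like this is the standard form:

    User-agent: Googlebot
    Disallow: /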
… or from all search engines.
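For all well-behaved bots, something like this:

    User-agent: *
    Disallow: /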
Each search engine has its own bot name; Bing's, for example, is Bingbot.
Robots.txt is a simple text file in the root of your website. It should be accessible at example.com/robots.txt or www.example.com/robots.txt.
You can read about robots.txt at robotstxt.org.
For a list of major search engine bot / spider names, see the top search engine bot names.
Using the robots.txt file with the correct bot name is generally the quickest way to remove a website from a search engine. Once the search engine has read the robots.txt file, the site is removed within about 2 days, unless something has changed recently. Google has removed websites within 1-2 days. Each search engine is different and response times vary, but the major search engines react relatively quickly.
To address the comments.
Robots.txt is in fact used by search engines to know which pages should and should not be indexed. This is well understood and has been a de facto standard since 1994.
Here is how Google works.
Google indexes, among other things, links, domains, URLs and page content.
The link table is used to discover new sites and pages, and to rank pages using the PageRank algorithm, which is based on the trust network model.
The URL table is used as a link between links and pages.
If you think of it in terms of a SQL database schema, the link table would look something like this:
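(Purely illustrative; these table and column names are my own sketch, not Google's actual schema.)

    -- Illustrative only: one row per discovered link
    CREATE TABLE link (
        link_id        BIGINT PRIMARY KEY,
        source_url_id  BIGINT,         -- the page on which the link was found
        target_url_id  BIGINT,         -- the page the link points to
        anchor_text    VARCHAR(255)
    );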
The domain table would look something like this:
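(Again, illustrative only.)

    -- Illustrative only: one row per known domain
    CREATE TABLE domain (
        domain_id    BIGINT PRIMARY KEY,
        domain_name  VARCHAR(255),
        first_seen   TIMESTAMP
    );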
The URL table would look something like this:
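(Illustrative only.)

    -- Illustrative only: ties a URL to its domain and, once fetched, to its page record
    CREATE TABLE url (
        url_id     BIGINT PRIMARY KEY,
        domain_id  BIGINT,             -- references domain
        url        VARCHAR(2048),
        page_id    BIGINT NULL         -- references page; NULL until the page has been fetched
    );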
The page table would look something like this:
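(Illustrative only.)

    -- Illustrative only: one row per fetched and indexed page
    CREATE TABLE page (
        page_id     BIGINT PRIMARY KEY,
        url_id      BIGINT,            -- references url
        fetched_at  TIMESTAMP,
        title       VARCHAR(255),
        content     TEXT,
        pagerank    FLOAT
    );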
The URL table is a linkage table between domains, links, and pages.
The page index is used to understand and index the contents of individual pages. Indexing is far more complicated than just an SQL table, but the illustration still holds.
If Google follows a link, the link will be added to the link table. If the URL is not in the URL table, it is added to the URL table and passed to the fetch queue.
When Google retrieves the page, it checks whether the robots.txt file has been read and, if so, whether it was read within the last 24 hours. If the cached robots.txt data is older than 24 hours, Google retrieves the robots.txt file again. If a page is restricted by robots.txt, Google will not index the page and will remove the page from the index if it is already there.
When Google detects a restriction in robots.txt, the restriction is sent to a queue for processing. Processing runs nightly as a batch job. The pattern is matched against all URLs, and any matching pages are removed from the page table by URL ID. The URL itself is kept for management purposes.
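Continuing the purely illustrative schema above, the nightly batch step would amount to something like this (the Disallow pattern shown is hypothetical):

    -- Hypothetical batch job: drop indexed pages whose URL matches a Disallow pattern,
    -- but keep the url rows themselves for management.
    DELETE FROM page
    WHERE url_id IN (
        SELECT url_id
        FROM url
        WHERE url LIKE 'http://example.com/private/%'
    );

    UPDATE url
    SET page_id = NULL
    WHERE url LIKE 'http://example.com/private/%';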
Once the page has been retrieved, it will be inserted into the page table.
Any link in the link table whose target was not retrieved, is restricted by robots.txt, or is a bad link returning a 4xx error is called a dangling link. And while PR can be calculated for the landing pages of dangling links using trust network theory, PR cannot be passed through these pages.
About 6 years ago, Google decided it was advisable to include dangling links in the SERPs. This happened when Google redesigned its index and systems to aggressively capture the entire web. The idea was to present valid search results to users even if the page was blocked to the search engine.
URLs have very little or no semantic value.
Links have some semantic value, but that value remains low because semantic indexing prefers more text, and a link does not perform well as a stand-alone element. Normally, the semantic value of a link is measured together with the semantic value of the source page (the page containing the link) and the semantic value of the landing page.
As a result, the URL for the landing page of a dangling link may not rank well at all. The exception is newly discovered links and pages. Typically, Google "tries out" newly discovered links and pages in the SERPs by setting their PR high enough for them to be found and tested in the SERPs. Over time, PR and CTR are measured and adjusted to place links and pages where they should be.
See ROBOTS.TXT DISALLOW: 20 years of mistakes to avoid, which also discusses the ranking I've described.
Listing bare links in the SERPs is wrong, and many have complained about it. It pollutes the SERPs with broken links and links behind logins or paywalls, for example. Google has not changed this approach. The ranking mechanisms, however, filter these links out and effectively remove them from the SERPs entirely.
Remember that the indexing engine and the query engine are two different things.
Google recommends using noindex on pages, but that is not always possible or practical. Take very large sites that use automation, for example: adding noindex to every page can be impossible, or at least cumbersome.
I had a website with millions of pages that I removed from Google's index within a few days using the robots.txt file.
And while Google argues against using the robots.txt file and in favor of using noindex, noindex is a much slower process. Why? Because Google uses a TTL-style metric in its index that determines how often it revisits a page. That can be a long time, up to a year or more.
Using noindex does not remove the URL from the SERPs in exactly the same way that robots.txt does, but the end result is the same. As it turns out, noindex is actually no better than using the robots.txt file. Both produce the same effect, while the robots.txt file gets results faster and in bulk.
And this is partly the point of the robots.txt file. It is generally accepted that users block entire areas of their sites using robots.txt, or block bots from the site completely. This is more common than adding noindex to pages.
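For example, a robots.txt that blocks a few hypothetical sections for all bots might look like this:

    User-agent: *
    Disallow: /admin/
    Disallow: /cart/
    Disallow: /search/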
Removing an entire website using the robots.txt file is still the fastest way, even if Google does not like it. Google is not God, and its website is not the New Testament. As much as Google tries, it still does not rule the world. Damn near, but not quite.
The claim that blocking a search engine using robots.txt prevents the search engine from seeing a noindex meta tag is utter nonsense and defies logic. You see this argument everywhere. Both mechanisms have exactly the same effect in practice, except that one is much faster thanks to bulk processing.
Keep in mind that the robots.txt standard was introduced in 1994, while the noindex meta tag did not come along until around 1996. In the beginning, removing a page from a search engine meant using the robots.txt file, and it stayed that way for quite some time. Noindex is just an extension of the existing process.
Robots.txt remains the #1 mechanism for limiting what a search engine will index, and it will likely stay that way for as long as I'm alive. (I'll be careful crossing the road; no more skydiving for me!)