How to completely wipe a duplicated project (statistics & URLs & emails) in GSA SER?

I wanted to duplicate an already used project and couldn’t find out how to clear everything in the new copy. Deleting the URL cache, the URL history, and the accounts is not enough. The statistics (Submitted/Verified), for example, keep the data from the original project. I’m sure there is more data kept in the background too. Can anyone help, please?

urls – Double slash // instead of https: – how to fix it?

In the page source, I see paths to images without https:, just a double slash //. The image optimizer fails to detect such images and only processes those with https:. Any ideas how to fix this and where it comes from? Has anyone come across this?

The theme is Amuli + Elementor.
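Protocol-relative URLs (//cdn.example.com/…) are valid HTML and resolve to the page’s own scheme, but tools that expect an explicit scheme can fail on them. As a minimal workaround sketch in Python (the markup and domain here are made-up examples, not taken from the actual site), you could rewrite them to explicit https::

```python
import re

# Hypothetical markup with a protocol-relative image path
html = '<img src="//cdn.example.com/pic.jpg" alt="photo">'

# Rewrite protocol-relative src/href attribute values to explicit https:
fixed = re.sub(r'\b(src|href)="//', r'\1="https://', html)
print(fixed)  # <img src="https://cdn.example.com/pic.jpg" alt="photo">
```

The double slash itself usually comes from the theme or page builder deliberately emitting protocol-relative URLs, so a theme setting or a filter in the template is the cleaner place to fix it.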

export – Get shareable URLs of all YouTube Music playlists automatically


  1. Go to your Playlists page.
  2. Scroll to the bottom of the Playlists page (making sure that all your playlists have loaded).
  3. Press F12 to open your console.
  4. Paste this JavaScript into your console.
    const playlists = [];
    document.querySelectorAll("ytmusic-two-row-item-renderer").forEach(playlist => {
      const title = playlist.querySelector(".details > .title-group").innerText;
      const url = playlist.querySelector("a").href;
      if (title !== "New playlist" && title !== "Your Likes")
        playlists.push({title, url});
    });
  5. (optional) To sort it, enter this:
    playlists.sort((a,b) => (a.title > b.title) ? 1 : ((b.title > a.title) ? -1 : 0));
  6. Do what you’d like with the array of objects. I provide a few options below.

If you want the output in JSON form, just stringify it:

JSON.stringify(playlists, null, '\t');

If you want a plain text version:

let myString = "";
playlists.forEach(list => myString += `${list.title} - ${list.url}\n`);

If you want to have the output be an HTML list:

let myHTML = "<ul>";
playlists.forEach(list => myHTML += `<li><a href="${list.url}" target="_blank">${list.title}</a></li>` );
myHTML += "</ul>";

How to stop/limit the number of URLs posted

I’ve set a project to

“stop the project after xxx verifications per url”
under Project > Options.

It has already done multiples of what it’s supposed to do and just keeps doing more. It’s the indexer engine; it has done about 1500 in half a day for each project. I just want it to do a steady amount and put the rest in the lists for when they are needed. But apart from turning that engine off completely (which I would have to do for each project), how do you get it to stay at the level you have set?

It’s not just 20-30% more - it’s not going to stop.

Random URLs to hide assets [duplicate]

Say I own a domain name and I’d like to host some resources on it while keeping them reasonably hidden from the public.

Since a 256-bit hash has sufficiently large entropy to prevent an exhaustive traversal, is it correct to consider any URL from my website that contains such a hash as hidden and secure?

For example, if I want to temporarily host a secret item on the domain, I could simply put it at a URL containing such a hash and assume:
  • Crawlers won’t be able to find it
  • Malicious users won’t be able to find it
  • The URL won’t be published anywhere, so search engines won’t index it (related to point 1)
  • It won’t be in the sitemap

I’m just wondering: are there any practical applications of this kind of scheme in the real world?
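This pattern does exist in practice as “capability URLs” (unlisted share links, password-reset links, pre-signed download URLs all work this way). As a sketch, assuming a placeholder example.com domain, generating a path segment with 256 bits of entropy in Python:

```python
import secrets

# 32 random bytes = 256 bits of entropy, rendered as 64 hex characters
token = secrets.token_hex(32)

# example.com is a placeholder domain for illustration only
url = f"https://example.com/private/{token}"
print(len(token))  # 64
```

One practical caveat: the URL can still leak through Referer headers, browser history, and server logs, so capability URLs are usually paired with headers like X-Robots-Tag: noindex and a strict Referrer-Policy.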

python – Creating names from URLs

I am creating a Name column in a DataFrame and setting its value based on substrings contained in another column.

Is there a more efficient way to do this?

import pandas as pd

df = pd.DataFrame([('', 'low'), ('', 'high')],
                  columns=['URL', 'Value'])  # second column name chosen arbitrarily

df['Name'] = df['URL']  # is this an intelligent thing to do?

# set Name based on substring in URL
df.loc[df['Name'].str.contains("pandas", na=False), 'Name'] = "PANDAS"
df.loc[df['Name'].str.contains("python|pitone", na=False), 'Name'] = "PYTHON"
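One common alternative for this kind of multi-condition mapping is numpy.select, which evaluates all conditions in one vectorized pass and takes the first matching condition per row. A sketch with made-up sample URLs (the python|pitone check is listed first so it takes precedence, mirroring how the later assignment overwrites the earlier one in the original):

```python
import numpy as np
import pandas as pd

# Sample URLs invented for illustration
df = pd.DataFrame({'URL': ['https://pandas.pydata.org/docs',
                           'https://docs.python.org/3']})

conditions = [
    df['URL'].str.contains('python|pitone', na=False),
    df['URL'].str.contains('pandas', na=False),
]
choices = ['PYTHON', 'PANDAS']

# Rows matching no condition fall back to the raw URL
df['Name'] = np.select(conditions, choices, default=df['URL'])
print(df['Name'].tolist())  # ['PANDAS', 'PYTHON']
```

This avoids rewriting the column multiple times and keeps the rule list in one place, which also makes the precedence between patterns explicit.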