Rewriting URLs – Rewrite the question mark in the post template to get a nice URL

I hope you are all well and you are safe in these times.

I was wondering if anyone can help me remove the question mark (?) from the URL.

The URL I want to rewrite looks like this:

The URL I want to have:

Please let me know what you think.

Docker – Nginx rewrites URLs to match the proxy address

I use a WordPress Docker container. The site is exposed on the host's port 8000: when I go to localhost:8000, I see my WordPress site.

Since it is tedious to always type localhost:8000 to see my website, I decided to use Nginx as a reverse proxy. I set up a virtual host in Nginx with a server name through which I can now visit the WordPress site.

Up to this point everything is fine: the site opens and I can see a list of my blog posts. But say I want to read my latest blog post about COVID-19; when I click the link, it opens as http://localhost:8000/posts/covid19.

I want it to open under the proxy URL instead; the whole page should be accessible through the site name.

I need Nginx to rewrite all of my links from localhost:8000/* so that they use the proxy hostname instead. Nobody likes typing ports when accessing a blog.

This is what my Nginx conf file looks like:

server {
        listen 80;
        listen [::]:80;

        root /var/www/;
        index index.html index.htm index.nginx-debian.html;


        location / {
                proxy_pass http://localhost:8000;
                #proxy_set_header Host $host;
                #proxy_redirect http://localhost:8000/ ;
                #try_files $uri $uri/ =404;
        }
}

How can I get all URLs on the proxy site to be rewritten with my custom hostname?
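One sketch of an approach (the hostname blog.example.com below is a placeholder for whatever your virtual host is called): forward the visitor's Host header to the container and rewrite the redirect headers coming back from it.

```nginx
# Sketch, assuming the virtual host is named blog.example.com (placeholder).
server {
    listen 80;
    listen [::]:80;
    server_name blog.example.com;

    location / {
        proxy_pass http://localhost:8000;
        # Pass the hostname the visitor actually used, so the backend
        # sees blog.example.com instead of localhost:8000.
        proxy_set_header Host $host;
        # Rewrite Location/Refresh headers in redirects from the backend.
        proxy_redirect http://localhost:8000/ http://$host/;
    }
}
```

Note that this only rewrites response headers; the links inside the HTML pages themselves are generated by WordPress from its siteurl/home settings (or the WP_HOME/WP_SITEURL constants), which would also need to point at the proxy hostname.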


web – Redirect different DDNS URLs pointing to the same dynamic public IP to different subdomains in Nginx?

I am trying to host multiple subdomains and multiple websites on a single Nginx server in the same virtual machine. I want them to be accessible from the Internet via DDNS; in this case I use No-IP.

I have already achieved this with a single website: I installed Jitsi Meet, created an FQDN in /etc/hosts pointing to the local IP, set up a port forward on the router, and in Nginx used the No-IP URL as the server name, which opened the resource (Jitsi).

However, when I try to do the same with multiple sites, I cannot reach them from the Internet. I changed /etc/hosts to have multiple subdomains pointing to the same local IP, kept the port forward, and created multiple server {} blocks, each with a different server name matching a DDNS URL registered in No-IP, and each serving resources from a different folder. Locally, from inside the virtual machine I am working on, I can enter the different subdomain URLs registered in /etc/hosts into the browser and each one opens a different resource (a website). Over the Internet, however, this does not work; there is no redirection.

The error a browser shows me from outside the virtual machine is that it cannot find the IP address of the URL being redirected to, i.e. the IP address of the local subdomain. Sometimes it tells me the resource has refused the connection. I don't know whether the problem is with the /etc/hosts file (a DNS problem) or a firewall problem.

I've also seen some people use a proxy setup to redirect, but as I understand it that is useful when you have multiple virtual machines with different IP addresses. If you only have one machine, a simple name-based setup with subdomains should be simpler and perform better.
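For reference, name-based virtual hosting on a single IP is just multiple server blocks distinguished by server_name (the hostnames below are placeholders). One caveat that matches the symptoms above: entries in /etc/hosts are only visible to the machine they are on, so external clients can only resolve each subdomain if the DDNS provider publishes it.

```nginx
# Sketch: two sites on one machine, selected by the Host header the client sends.
server {
    listen 80;
    server_name site1.example.ddns.net;   # placeholder DDNS name
    root /var/www/site1;
}
server {
    listen 80;
    server_name site2.example.ddns.net;   # placeholder DDNS name
    root /var/www/site2;
}
```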

possible new destination URLs from current accounts.

I keep getting the same message. I deleted/blocked the domain/URL both in the global settings and in the specific project (after deactivating all but one of the projects to isolate the problem).
The same message appears again and again:
15:46:00: (-) 1/1 PR-0 too low –
15:46:00: (+) 001 possible new destination URLs from current accounts.
even though it is already listed under
Project > Options > Skip websites with the following words in URL/Domain

Rewrite URLs – Rewrite rules are redirected

I am trying to set up a custom rewrite for a page.
Page slug: /near-me/
Desired rewrite: /near-me/STATE/city/
It does not work, even after flushing the permalinks.

//* Add Rewrite Rule
add_action('init', 'sym_nearme_rewrite_rule', 10, 0);
function sym_nearme_rewrite_rule() {
    // reconstructed body: the target page and the state/city query vars are assumptions
    add_rewrite_rule('^near-me/([^/]+)/([^/]+)/?$', 'index.php?pagename=near-me&state=$matches[1]&city=$matches[2]', 'top');
}

urls – Where can you add a standalone Flash folder to WordPress?

How would I add a standalone Flash folder/application to WordPress?

Suppose the folder is called "Flashfolder". Access should be via a direct URL. The actual content of the folder is all that is needed to run it; it's a kind of slideshow. If you access the folder locally through the browser, the application runs properly.

I added the folder at the root WP level, next to 'wp-admin', 'wp-content', etc. However, when I try to access it via the remote URL, the page is blank. But there is no 404, so I think it does reach the correct URL?

I think something is restricting access to this URL, just as direct access to certain paths is blocked. I am also not sure whether I have placed the folder in the right place. Any help would be appreciated.

Python – Scraping data from multiple URLs into a single data frame

I have a class that: 1) goes to a URL, 2) scrapes a link and a date (filing_date) from that page, 3) navigates to the link, and 4) scrapes the table on that page into a data frame.

I also want the filing_date from step 2 added to the data frame, but it is not written to the data frame properly, probably due to the way I pass the data between functions within the class. So instead of attaching the corresponding filing date to each row, like this:

                     nameOfIssuer                cik Filing Date
0    Agilent Technologies, Inc. (A)  ...  0000846222  2020-01-10
1                 Adient PLC (ADNT)  ...  0000846222  2020-01-10
..                             ...   ...         ...         ...
662            Whirlpool Corp (WHR)  ...  0000846222  2010-07-08

only the last date scraped from the previous page ends up in all rows:

                     nameOfIssuer                cik Filing Date
0    Agilent Technologies, Inc. (A)  ...  0000846222  2010-07-08
1                 Adient PLC (ADNT)  ...  0000846222  2010-07-08
..                             ...   ...         ...         ...
662            Whirlpool Corp (WHR)  ...  0000846222  2010-07-08

I tried saving the dates in an empty list and then appending it to the output data frame, but since the length of the list does not match the length of the data frame, I get ValueError: Length of values does not match length of index.

Can anyone suggest the best approach (e.g. a separate function exclusively for filing_date, or maybe returning a data frame instead)?

import pandas as pd
from urllib.parse import urljoin
from bs4 import BeautifulSoup, SoupStrainer
import requests

class Scraper:
    BASE_URL = ""
    FORMS_URL_TEMPLATE = "/cgi-bin/browse-edgar?action=getcompany&CIK={cik}&type=13F"

    def __init__(self):
        self.session = requests.Session()

    def get_holdings(self, cik):
        """Main function that first finds the most recent 13F forms and then passes
        them to scrape_document to get the holdings for a particular institutional investor."""
        # get the form urls
        forms_url = urljoin(self.BASE_URL, self.FORMS_URL_TEMPLATE.format(cik=cik))
        parse_only = SoupStrainer('a', {"id": "documentsbutton"})
        soup = BeautifulSoup(self.session.get(forms_url).content, 'lxml', parse_only=parse_only)
        urls = soup.find_all('a', href=True)

        # get form document URLs
        form_urls = []
        for url in urls:
            url = urljoin(self.BASE_URL, str(url.get("href")))

            headers = {'User-Agent': 'Mozilla/5.0'}
            page = requests.get(url, headers=headers)
            soup = BeautifulSoup(page.content, 'html.parser')

            # Get filing date and "period date"
            dates = soup.find("div", {"class": "formContent"})
            filing_date = dates.find_all("div", {"class": "formGrouping"})[0]
            filing_date = filing_date.find_all("div", {"class": "info"})[0]
            filing_date = filing_date.text

            # get form table URLs
            parse_only = SoupStrainer('tr', {"class": 'blueRow'})
            soup = BeautifulSoup(self.session.get(url).content, 'lxml', parse_only=parse_only)
            form_url = soup.find_all('tr', {"class": 'blueRow'})[-1].find('a')['href']
            if ".txt" in form_url:
                form_url = urljoin(self.BASE_URL, form_url)
                form_urls.append(form_url)

        # only the filing_date from the last loop iteration survives here
        return self.scrape_document(form_urls, cik, filing_date)

    def scrape_document(self, urls, cik, filing_date):
        """This function scrapes holdings from particular document URLs"""

        cols = ['nameOfIssuer', 'titleOfClass', 'cusip', 'value', 'sshPrnamt',
                'sshPrnamtType', 'putCall', 'investmentDiscretion',
                'otherManager', 'Sole', 'Shared', 'None']

        data = []

        for url in urls:
            soup = BeautifulSoup(self.session.get(url).content, 'lxml')

            for info_table in soup.find_all(['ns1:infotable', 'infotable']):
                row = []
                for col in cols:
                    d = info_table.find([col.lower(), 'ns1:' + col.lower()])
                    row.append(d.text.strip() if d else 'NaN')
                data.append(row)

        df = pd.DataFrame(data, columns=cols)
        df['cik'] = cik
        df['Filing Date'] = filing_date

        return df

holdings = Scraper()
holdings = holdings.get_holdings("0000846222")
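One way to attack the underlying problem is to stop passing a single filing_date into scrape_document and instead keep each page's rows together with its own date, assigning the date per chunk before concatenating. A minimal sketch with hypothetical data (build_frame and the fake rows below are illustrations, not part of the real scraper):

```python
import pandas as pd

# Sketch: each scraped URL contributes a (rows, filing_date) pair; the date is
# assigned per chunk, so every row keeps the date of the page it came from.
def build_frame(chunks):
    """chunks: list of (rows, filing_date) pairs, one pair per scraped URL."""
    frames = []
    for rows, filing_date in chunks:
        df = pd.DataFrame(rows)
        df['Filing Date'] = filing_date  # one date per page, not one for all
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# Hypothetical data standing in for two scraped pages:
chunks = [
    ([{'nameOfIssuer': 'Agilent Technologies, Inc. (A)'},
      {'nameOfIssuer': 'Adient PLC (ADNT)'}], '2020-01-10'),
    ([{'nameOfIssuer': 'Whirlpool Corp (WHR)'}], '2010-07-08'),
]
df = build_frame(chunks)
```

In the class above, that would mean having get_holdings collect (form_url, filing_date) pairs instead of a bare form_urls list, and looping over those pairs in scrape_document.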

2013 – Rearrange managed navigation but keep existing URLs

We have a website that has been used across the company for several years. It uses managed navigation to create user-friendly URLs. It contains a lot of published content and various discussion forums. For us it was a very successful site with a lot of usage.

The original navigation menu at the top of the page was pretty flat, with only a few choices that led to landing/query pages, which then led deeper into the content. This was a specific decision made at the beginning of the design process and was fine. However, in our user surveys we learned that we need to improve the navigation.

What we have

For example, the site is aimed at different groups within our structure: the admin group, the user group, and the general-interest group. The top navigation menu currently contains only one menu item for groups; some information about the three actual groups is displayed on the groups landing page.

  • Groups (in Navi and friendly URL)
    • Admins (friendly URL only)
    • User (friendly URL only)
    • General topics (friendly URL only)

Many people have bookmarked the friendly URLs for their specific group(s), and we have many accompanying materials with links directly to friendly URLs such as site/groups/admins or site/groups/admins/discussions.

What we want to do

Our user interviews showed that we should move to a mega-menu style with headings and rearrange some of the ordering. We want to rearrange existing nodes and reposition them in new locations, but keep the existing URLs, as they have been popular and in use for years.

  • Get involved (heading only, not clickable)
    • Groups (heading only, not clickable)
      • Administrators (link to /site/groups/admins)
      • Users (link to /site/groups/users)
      • General interest (link to /site/groups/general)
      • All groups (link to /site/groups)
    • Some new ways to get involved

The new menu items are not a problem, as they are new things we can easily add. The problem is rearranging the existing navigation terms. For example, groups used to be a first-level node, but now it sits two levels down. This means the managed navigation tries to set the page URL to site/involved/groups/admins.

I realized that I could go to the Term Store and configure the friendly URL. I tried setting the friendly URL segment of the Get involved node to be empty, and it let me do it.


The navigation was updated, and when I hover over the link in the menu, the desired URL site/groups/admins is displayed; but when I click the link in the navigation, I get a page-not-found error. So I don't think that will actually work.

Another solution I am considering is to create a full shadow navigation that retains the original URLs but is hidden from the navigation system, plus a new set of nodes to display in the navigation. These would provide the required structure with new URLs, and each new URL would point to a redirect page that goes to the old, known URL. This seems very labor-intensive and likely to break or be difficult to maintain.

Any other suggestions on how we can revise our navigation but keep the existing URLs? (We are only site-collection administrators, with no access to central administration or the like.)

Google Sheets – Open multiple URLs in different new browser tabs

My question relates to the previous answer here:

How do I open multiple URLs in different new Google Sheets browser tabs with a single click?

I tried copying and pasting the code and running the functions in Apps Script, but it doesn't open the URLs in new tabs.

If I run TestOpenTabs(), nothing happens.

What am I missing?

Does this solution still work?