ubuntu – Nginx Directory Index is Forbidden

I have a Laravel REST API for a mobile app running under Ubuntu with Nginx, and everything was working just fine until today: I woke up and users can't access the API. I checked the Nginx error log and found the following:

2021/04/18 01:21:52 [error] 2772#2772: *138808 directory index of "/var/www/html/mydomain/public/" is forbidden, client: 9x.1x.1x.5x, server: mydomain.com, request: "GET / HTTP/1.>
2021/04/17 23:16:01 [error] 2772#2772: *138792 directory index of "/var/www/html/mydomain/public/" is forbidden, client: 4x.15x.20x.2x1, server: mydomain.com, request: "GET /?XDEBUG>

This is my Nginx config:

server {

    
    root /var/www/html/mydomain/public;

    # Add index.php to the list if you are using PHP
    index index.php;

    server_name mydomain.com www.mydomain.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?$query_string;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
    #
    #   # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
    #   # With php-cgi (or other tcp sockets):
    #   fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot


}

server {
    if ($host = www.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80;
    listen [::]:80;

    server_name mydomain.com www.mydomain.com;
    return 404; # managed by Certbot
}

No one changed anything on the server side and it was working before. What is the issue here?

I appreciate any help and ideas; this is a live project.

truecrypt – How can I get Tracker to index a loop device

Having recently upgraded from Ubuntu 16.04 to Ubuntu 20.04, I can't get Tracker to work as it used to: I am still using Truecrypt (which, I know, is "outdated") to encrypt my data, and I rely on Tracker to index the corresponding loop device provided by Truecrypt. In Ubuntu 16.04 this all worked well, even though I had to install Tracker myself back then: there simply was no indexing service "hard-wired" into Ubuntu in version 16.04.
Now, with Tracker being the indexing system that comes with Ubuntu by default, my home folder gets indexed just fine. However, Tracker does not generate any data for the Truecrypt loop device, even though I added the corresponding path (/media/truecrypt1) in dconf-Editor under /org/freedesktop/tracker/miner/files/index-recursive-directories. (I even enabled "index-removable-devices" and "index-optical-discs", which I don't really think I need to do.)
Does anyone have an idea of how to get Tracker back to indexing my data?
Thank you such a great deal for your support.

Time-Complexity Verification: Code with two loops with an index halved at each iteration

I have the following code in Python and was asked to find the tightest upper bound in terms of Big-O. I've made two attempts below and I don't know which one is right. Can you help me verify which is the correct answer/approach?

def f1(L):
    n = len(L)
    while n > 0:
        n = n // 2
        for i in range(n):
            if i in L:
                L.append(i)
    return L

My attempts:
Approach 1:
The while loop runs $\log(n)$ times, and at the $i$-th iteration the for-loop runs $\frac{n}{2^i}$ times; each evaluation of the conditional inside the for-loop costs at most $O(n)$ (because “in” has complexity $O(n)$ according to https://wiki.python.org/moin/TimeComplexity). Thus, the time-complexity of the for-loop is $O(n^2)$. So the time-complexity of the whole code is: $\sum_{i=1}^{\log(n)} O(\frac{n^2}{2^i}) = O(\sum_{i=1}^{\log(n)} \frac{n^2}{2^i}) = O(n^2 \cdot \frac{1-(1/2)^{1+\log(n)}}{1-(1/2)^{\log(n)}}) = O(n^2)$

Approach 2:
In the for-loop we have the conditional “if i in L”; the “in” costs $O(n)$, so the time-complexity of the for-loop is $\sum_{i=1}^{n} O(n) = O(\sum_{i=1}^{n} n) = O(n^2)$. Looking at the while loop, we see that “n” is halved at each iteration because of the statement “n = n // 2”. Denote by $n_k = \lfloor \frac{n}{2^k} \rfloor$ the value of $n$ at the $k$-th iteration; disregarding the floor function (we won't care about $\pm 1$ in the value of $n_k$ since we care about time-complexity), we seek the smallest $k$ (where $k$ denotes the iteration of the while loop) with $n_k = 1 \leq \frac{n}{2^k} \iff k \leq \log(n)$. Hence the total time-complexity of the code is $\sum_{i=1}^{\log(n)} O(n^2) = O(\log(n)) \cdot O(n^2) = O(n^2 \cdot \log(n))$
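(As a quick sanity check of my own, not part of the original question: the sketch below charges len(L) comparisons to every “i in L” test, which is its worst-case cost, and compares the total against $n^2$ and $n^2 \cdot \log(n)$ for a few input sizes. The name f1_counted and the chosen sizes are arbitrary.)

import math

def f1_counted(L):
    # Same structure as f1, but count the work instead of only building L.
    ops = 0
    n = len(L)
    while n > 0:
        n = n // 2
        for i in range(n):
            ops += len(L)       # assumed worst-case cost of the "i in L" test
            if i in L:
                L.append(i)
    return ops

for n in (2**8, 2**9, 2**10, 2**11):
    ops = f1_counted(list(range(n)))
    print(n, ops / n**2, ops / (n**2 * math.log2(n)))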

sql server – Table scan instead of index seeks happening when where clause filters across multiple tables in join using OR

Firstly, thank you for providing the actual execution plans for both cases; that is one of the most helpful things for troubleshooting performance problems.

Secondly, the issue you’re facing is due to the difference in Cardinality between the first query and second query, which in a few words is the number of records your query might return relative to how many records are in the tables themselves, for the predicates (conditions in the JOIN, WHERE, and HAVING) specified.

When SQL Server analyzes your query, its Cardinality Estimator uses statistics the server stores about the tables involved to make a reasonable estimate of how many rows will be returned from each table in your query. The execution plan is then generated based on this information, as different operations are more efficient in different situations with different amounts of rows to be returned.

For example, if your query results in a high Cardinality (lot of records being returned), generally an index scan is a more performant operation than an index seek because there is a higher likelihood the index scan will encounter a majority of your records sooner than it would’ve trying to seek out each one individually.

Sometimes the Cardinality Estimator gets confused by the conditions in your predicates, causing it to misestimate the cardinality and resulting in performance issues. One way to verify you have a cardinality estimation issue is to compare the Estimated Number of Rows to the Actual Number of Rows in the actual execution plan. If they are off by a significant amount (e.g. a magnitude or more), then there is likely a cardinality estimation issue.

Finally, sorry to get your hopes up, but your execution plans don't seem to be indicative of a cardinality estimation issue. It does seem your second execution plan is estimating the cardinality correctly, and it truly is a case where the conditions of your WHERE clause return enough rows for SQL Server's engine to think an index scan will be more performant here than an index seek. As you'll notice, both the Estimated Number of Rows and the Actual Number of Rows in your second execution plan are now about 1.5 million rows.

That being said, even with accurate statistics and cardinality estimates, sometimes the engine is just plain wrong. You can test this by using the FORCESEEK table hint, which in your query's case would be appended after the table, like FROM T1617 WITH (FORCESEEK), for example. Fair warning: index hints are only recommended for production code after extended testing, as they can lead to worse performance when used incorrectly. But FORCESEEK is a relatively benign one when used appropriately, and it can help correct some uncommon cases where the engine is wrong about which operation will be faster.
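For illustration only, here is a minimal sketch of where the hint goes; the column name and variable below are placeholders I made up, not taken from your actual query:

-- Sketch only: SomeColumn and @SomeValue are placeholder names.
SELECT t.SomeColumn
FROM dbo.T1617 AS t WITH (FORCESEEK)
WHERE t.SomeColumn = @SomeValue;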

replacement – How to define an index list of local variables of arbitrary length

Suppose initially I have a list that looks like

list1 = {A, B, C}

where the elements A, B, C are all matrices. I want to substitute all three elements with list1 itself:

list2 = ReplacePart[list1, {i_} -> list1]

which gives me something that looks like

list2 = {{A, B, C}, {A, B, C}, {A, B, C}}

Then I want to do this one more time and substitute all elements in list2 with list1, so I write

list3 = ReplacePart[list2, {i_, j_} -> list1]

Eventually, suppose I want to do this 10 times. At that point I would have to write out a list of 10 local variables:

{i_, j_, k_, l_, m_, n_, ...}

My question is: how can I define the index list of local variables to be arbitrarily long, so that I can just tell Mathematica the length of the list and don't have to write the variables out explicitly?

Thank you so much!
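(A possible sketch of my own, not from the question: since the names i_, j_, ... are never used on the right-hand side of the rule, a list of anonymous blanks of the required length works just as well, and it can be generated with Table.)

(* Sketch: build the position pattern {_, _, ..., _} programmatically. *)
list1 = {A, B, C};
list2 = ReplacePart[list1, Table[_, {1}] -> list1];  (* same as {i_} -> list1 *)
list3 = ReplacePart[list2, Table[_, {2}] -> list1];  (* same as {i_, j_} -> list1 *)

(* Doing it 10 times in one go, growing the pattern length at each step: *)
result = Fold[ReplacePart[#1, Table[_, {#2}] -> list1] &, list1, Range[10]];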

python – Creating nxm index list of array a

Here is the problem:

Given a numpy array 'a' that contains n elements, denote by b the array of its unique values in ascending order, and denote by m the size of b. You need to create a numpy array with dimensions n×m in which each row contains a 1 in the column whose value in b equals that row's element of a, and a 0 everywhere else.

import numpy as np


def convert(a):
    b = np.unique(sorted(a))          # unique values of a in ascending order
    result = []                       # build the rows as a regular list first
    for i in a:
        result.append((b == i) * 1)   # boolean row converted to 0/1
    return np.array(result)


a = np.array((1, 1, 2, 3, 2, 4, 5, 2, 3, 4, 5, 1, 1))
print(convert(a))

This is my solution. Are there any improvements that I can make?
I'm not sure about building the result as a regular list and then converting it into an np.array.
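One possible fully vectorized alternative, offered as a sketch rather than an authoritative answer: comparing a as a column vector against b broadcasts into the whole n×m boolean matrix in one step, avoiding the Python-level loop. The function name convert_vectorized is just a placeholder.

import numpy as np


def convert_vectorized(a):
    # Broadcast a (as a column) against the sorted unique values b (as a row),
    # then cast the resulting n x m boolean matrix to integers.
    b = np.unique(a)                      # np.unique already returns sorted values
    return (np.asarray(a)[:, None] == b).astype(int)


a = np.array((1, 1, 2, 3, 2, 4, 5, 2, 3, 4, 5, 1, 1))
print(convert_vectorized(a))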