reverse proxy – Why does the “/” nginx location rule fail to catch some URLs? Isn’t it supposed to match everything?

I have an nginx instance that acts as a reverse proxy: it forwards requests either to an Angular app or to a Node.js backend, depending on the request URL. There is also a rule location ~ /s/(cas)/(.*) that serves static content (although I now realize that if “/” caught this route too, that rule would be unnecessary, since the static content is also served from backend:4000).

My concern is with the most general rule, “/”, which is supposed to catch every request that does not match any other location. It is not being applied correctly to some URLs, causing nginx to return its 50x.html error page. In other words, this location does not seem to catch all the traffic that fails to match a previous rule, and it is the one rule responsible for routing the traffic that should land on the Angular app.

If I’m correct, this should fall under the “/” rule:

https://SUBDOMAIN.DOMAIN.es/user/trip/13925/instant?sharedToken=(REDACTED)

And these should also be caught by the “/” rule, yet instead they show the nginx error page after a long timeout:

https://SUBDOMAIN.DOMAIN.es/user/trip/foo/instant?sharedToken=(REDACTED) # changed id for "foo"
https://SUBDOMAIN.DOMAIN.es/user/trip/instant?sharedToken=(REDACTED) # removed id segment of url
https://SUBDOMAIN.DOMAIN.es/user/instant?sharedToken=(REDACTED) # also removed "trip" segment of url

Any other variation of the URL works fine and is proxied to http://backend:4000.

So, why aren’t these URLs caught by the location “/”?

This is the nginx config file. Domain and subdomain have been omitted on purpose:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    expires $expires;
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
    server_name (SUBDOMAIN).(DOMAIN_NAME).es;
    ssl_certificate /etc/nginx/ssl/CERT.crt;
    ssl_certificate_key /etc/nginx/ssl/CERT.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_session_cache shared:SSL:5m;
    ssl_session_timeout 1h;
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

    location ~ /api(?<url>/.*)  {
        resolver 127.0.0.11;
        set $target http://backend:5000/api${url}$is_args$args;
        proxy_set_header X-Forwarded-Host $host;     # Relay whatever hostname was received
        proxy_set_header X-Forwarded-Proto $scheme;  # Relay either http or https
        proxy_set_header X-Forwarded-Server $host;   # Relay whatever hostname was received
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Prefix /api/;
        proxy_set_header Host "SUBDOMAIN.DOMAIN.es";

        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Max-Age 3600;
        add_header Access-Control-Expose-Headers Content-Length;
        add_header Access-Control-Allow-Headers Range;

    ## Websockets support 2/2
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    ## END Websockets support 2/2

        proxy_pass $target;
        client_max_body_size 10M;
    }

    location ^~ /_assets/ {
        alias /usr/share/nginx/html/assets/;
    }

    location ^~ /.well-known/acme-challenge/ {
        alias /usr/share/nginx/html/.well-known/acme-challenge/;
    }

    location ~ /s/(cas)/(.*) {
        add_header Pragma "no-cache";
        add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
        proxy_pass http://backend:4000;
    }

    location / {
        #root /usr/share/nginx/html;
        proxy_pass http://backend:4000;
        expires -1;
        proxy_set_header X-Forwarded-Host "SUBDOMAIN.DOMAIN.es";
        proxy_set_header X-Forwarded-Server "SUBDOMAIN.DOMAIN.es";
        proxy_set_header Host "SUBDOMAIN.DOMAIN.es";

        add_header Pragma "no-cache";
        add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";

        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Max-Age 3600;
        add_header Access-Control-Expose-Headers Content-Length;
        add_header Access-Control-Allow-Headers Range;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}
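
In case it is useful, one thing I considered in order to see which location block actually handles a failing request is tagging each block with a distinct response header. This is only a sketch, not part of my current config, and the X-Debug-Location header name is made up:

# hypothetical debug headers, one per existing location block
location ~ /api(?<url>/.*) {
    add_header X-Debug-Location "api" always;
    # ... existing directives unchanged ...
}

location ~ /s/(cas)/(.*) {
    add_header X-Debug-Location "s-cas" always;
    # ... existing directives unchanged ...
}

location / {
    add_header X-Debug-Location "root" always;
    # ... existing directives unchanged ...
}

Then something like curl -sI https://SUBDOMAIN.DOMAIN.es/user/trip/instant should reveal which block the failing URLs are really landing in.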

Unity: NavMeshAgent fails to find paths in certain areas of the NavMesh

I’ve been using Unity for about 2 days, and I have a scene with a NavMesh. I can click on the NavMesh and get the NavMeshAgents to move to that position most of the time. However, there is a region of the NavMesh that it fails to generate a path to. I technically have two NavMeshes, because I have two differently sized NavMeshAgents, but both fail in the same region. Since I’m very new to Unity, I don’t know how to diagnose this, much less fix it. How should I go about learning more about this problem?

EDIT: I can path through the affected area, as long as the destination point is on the other side of it. Also, when I put a cube in the middle of the affected area and rebake the NavMesh, the problem area moves to a new location, so it must have something to do with the geometry.
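
Since I don’t really know where to start, would something like the script below be a sensible first diagnostic? It is a rough sketch I pieced together from the NavMesh docs (the 1 m sampling distance is arbitrary); the idea is to log whether the clicked point is on the NavMesh at all and whether a complete path to it can be computed:

using UnityEngine;
using UnityEngine.AI;

// Logs whether a path can be computed from this object's position to the clicked point.
public class NavMeshPathProbe : MonoBehaviour
{
    void Update()
    {
        if (!Input.GetMouseButtonDown(0))
            return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit))
            return;

        // Is the clicked point actually on (or near) the baked NavMesh?
        if (NavMesh.SamplePosition(hit.point, out NavMeshHit navHit, 1.0f, NavMesh.AllAreas))
            Debug.Log($"Clicked point snaps to the NavMesh at {navHit.position}");
        else
            Debug.Log("Clicked point is not within 1 m of the NavMesh");

        // Can a complete path be computed from here to the clicked point?
        var path = new NavMeshPath();
        bool ok = NavMesh.CalculatePath(transform.position, hit.point, NavMesh.AllAreas, path);
        Debug.Log($"CalculatePath returned {ok}, path status: {path.status}");
    }
}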

windows subsystem for linux – How to upgrade ubuntu 18.04 to 20.04 in WSL when “wsl --export” fails

I’m trying to follow the directions to upgrade my WSL 1 Ubuntu (18.04) release to WSL 2 Ubuntu-20.04, and the first step gives me an error message I don’t know how to work around.

wsl --set-version ubuntu 2
Conversion in progress, this may take a few minutes...
For information on key differences with WSL 2 please visit https://aka.ms/wsl2
Exporting the distribution failed.
bsdtar: Couldn't read link data: I/O error
bsdtar: Error exit delayed from previous errors.

I have also tried doing it the “normal” Ubuntu way and get different “errors”:

$ lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:    18.04
Codename:   bionic
cfclark@p53:~$ sudo apt update

Hit:1 http://ppa.launchpad.net/git-core/ppa/ubuntu bionic InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
cfclark@p53:~$ sudo apt upgrade

Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following package was automatically installed and is no longer required:
  libdumbnet1
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
cfclark@p53:~$ sudo do-release-upgrade

Checking for a new Ubuntu release
Get:1 Upgrade tool signature (1554 B)                                                                                                                                                                       
Get:2 Upgrade tool (1340 kB)                                                                                                                                                                                
Fetched 1342 kB in 0s (0 B/s)                                                                                                                                                                               
authenticate 'focal.tar.gz' against 'focal.tar.gz.gpg' 
extracting 'focal.tar.gz'
lspci: Cannot find any working access method.

Checking package manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Hit http://archive.ubuntu.com/ubuntu bionic InRelease
Hit http://ppa.launchpad.net/git-core/ppa/ubuntu bionic InRelease
Hit http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit http://security.ubuntu.com/ubuntu bionic-security InRelease
Fetched 0 B in 0s (0 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done

Restoring original system state

Aborting
Reading package lists... Done
Building dependency tree
Reading state information... Done
=== Command detached from window (Sun May 23 13:26:05 2021) ===
=== Command terminated with exit status 1 (Sun May 23 13:26:15 2021) ===
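
For completeness, the next things I was planning to try are running the failing export step on its own, and then hunting from inside the distro for symlinks whose targets cannot be read, since that is what bsdtar seems to choke on. This is only a rough sketch; the export path is just an example:

# from PowerShell: run the export step by itself to confirm it is the part that fails
wsl --export ubuntu C:\temp\ubuntu-backup.tar

# from inside the distro: try to read every symlink target, so any unreadable one shows up on stderr
sudo find / -xdev -type l -exec readlink {} \; > /dev/null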

alter table – Supplying root password to MySQL V8 sort of fails

On Ubuntu 20.04 I have installed MySQL V8.0.25, but I am failing to apply a (valid) root password. The apt-get process for installing mysql-server did not ask for a root password, and I cannot enter mysql with the “mysql -u root -p” command; I always have to use “sudo mysql”.

So I first tried the process described by this MySQL page. But this did not work at all, because V8 does not seem to support the PASSWORD() function.
So instead of running

UPDATE mysql.user
    SET authentication_string = PASSWORD('MyNewPass'), password_expired = 'N'
    WHERE User = 'root' AND Host = 'localhost';
FLUSH PRIVILEGES;

I used this:

UPDATE mysql.user 
   SET authentication_string = CONCAT('*', UPPER(SHA1(UNHEX(SHA1('MyNewPass'))))), password_expired = 'N' 
   WHERE User = 'root' AND Host = 'localhost';
FLUSH PRIVILEGES;

which I found described here.

The query
select host, user, authentication_string, plugin from mysql.user;
shows a nice table:

+-----------+------------------+------------------------------------------------------------------------+-----------------------+
| host      | user             | authentication_string                                                  | plugin                |
+-----------+------------------+------------------------------------------------------------------------+-----------------------+
| %         | joomla           | $A$005$MF`6ea"OfH5v1kTuRW0zJS5MKk82btugdAz62uWe6QkxnrXtTLtx5M. | caching_sha2_password |
| localhost | debian-sys-maint | $A$005$l%.r}2CBQT:+DV)a9S/UJUDJoFA8PhnCIE.E3zDFbBeUZ5vTrNSZpZDDv05 | caching_sha2_password |
| localhost | mysql.infoschema | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
| localhost | mysql.session    | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
| localhost | mysql.sys        | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
s%nn69n9�NkFf7xoPdW/CCD/NjvLhTKXtx8gQmTX.RpIbOcHWsA. | caching_sha2_password |
| localhost | root             | mypass                                                                 | auth_socket           |
+-----------+------------------+------------------------------------------------------------------------+-----------------------+

but still “mysql -u root -p” does not work with the supplied password.
I still get this message:

$ mysql -u root -p
Enter password: 
ERROR 1698 (28000): Access denied for user 'root'@'localhost'

I also tried this – without any success:

UPDATE mysql.user 
   SET authentication_string = 'mypass', password_expired = 'N', plugin = '' 
   WHERE User = 'root' AND Host = 'localhost';

The ALTER USER command did not work either:

neither

ALTER USER 'root'@'localhost' IDENTIFIED BY 'oldpass';
ERROR 1396 (HY000): Operation ALTER USER failed for 'root'@'localhost'

nor

ALTER USER 'root'@'localhost' IDENTIFIED BY 'oldpass' REPLACE 'mypass';
ERROR 1396 (HY000): Operation ALTER USER failed for 'root'@'localhost'

I have now spent many hours on this and have no clue how to proceed. Any help is appreciated.
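
For reference, what I am ultimately trying to achieve is just the documented MySQL 8 way of setting the root password, run from the “sudo mysql” session (as I understand the syntax; 'MyNewPass' is a placeholder):

ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'MyNewPass';
FLUSH PRIVILEGES;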

expect – autoexpect fails to run

I just installed the expect package (which is supposed to contain autoexpect).
When I run autoexpect, I get this error:

can't find package Expect
    while executing
"package require Expect"
    (file "/usr/bin/autoexpect" line 6)

Has anyone else experienced this problem? Were you able to get it running?
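
If it helps, a minimal way to reproduce the failure outside of autoexpect is presumably to ask the interpreters for the package directly (an untested sketch):

# does the expect interpreter itself work?
expect -c 'puts [info patchlevel]'

# can plain tclsh locate the Expect package that autoexpect asks for?
echo 'puts [package require Expect]' | tclsh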

linux – cURL with NSS using a PKCS* cert fails with ‘SEC_ERROR_BAD_KEY’

This is on CentOS 7. A PKCS8 certificate was generated on a Mac and then moved to the server. I’m trying to use it to connect to an API, but curl returns the following:

# curl https://api.xxx.com --cert ./cert.pem
unable to load client key: -8178 (SEC_ERROR_BAD_KEY)

This seems to be due to cURL being compiled with NSS. The following links seem to cover the problem in detail:

https://bugzilla.redhat.com/show_bug.cgi?id=1440873

https://access.redhat.com/solutions/3390021 (don’t have a subscription to see what the solution may be)

https://stackoverflow.com/questions/22499425/ssl-certificate-generated-with-openssl-not-working-on-nss

The certificate file is formatted as follows (not sure if PKCS8 or PKCS11):

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
Bag Attributes
  friendlyName: api
  localKeyID: 00 01 02 03 ...
Key Attributes: <No Attributes>
-----BEGIN ENCRYPTED PRIVATE KEY-----
...
-----END ENCRYPTED PRIVATE KEY-----

I tried creating different PEM files after converting the private key (so that the certificate and key parts are in the same PEM file again), and I also tried specifying the converted cert and private key separately, but these attempts return other errors; the conversions are sketched below, followed by the failing curl calls.
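
The conversions were roughly along these lines (a sketch from memory; the exact flags and file names may have differed slightly):

# decrypt the PKCS#8 key into a traditional, unencrypted RSA key
openssl rsa -in cert.pem -out rsa.key

# or keep the key encrypted, but as a traditional DES3-encrypted PEM
openssl rsa -in cert.pem -des3 -out des3.key

# pull out just the certificate
openssl x509 -in cert.pem -out cert.crt

# recombine certificate and converted key into single PEM files
cat cert.crt rsa.key  > rsa.pem
cat cert.crt des3.key > des3.pem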

# curl https://api.xxx.com --cert ./cert.crt --key ./rsa.key -v
NSS error -8191 (SEC_ERROR_LIBRARY_FAILURE)
curl: (35) security library failure

# curl https://api.xxx.com --cert ./cert.crt --key ./des3.key -v
NSS error -8177 (SEC_ERROR_BAD_PASSWORD)
curl: (58) Unable to load client key: Incorrect password

# curl https://api.xxx.com --cert ./rsa.pem
NSS error -8191 (SEC_ERROR_LIBRARY_FAILURE)
curl: (35) security library failure

# curl https://api.xxx.com --cert ./des3.pem
NSS error -8177 (SEC_ERROR_BAD_PASSWORD)
curl: (58) Unable to load client key: Incorrect password

What can be done to get this working?

How to troubleshoot when a remote push install of Symantec Endpoint Protection clients fails

For some clients the installation is successful, but for others the deployment fails without any error or feedback returned.


Is there a way to consult any logs for debugging purposes?

PS: By sniffing the traffic between the console and the target client, we see that a TCP/445 session is successfully opened and that the two machines are communicating, which rules out a firewall blocking the deployment.


sql server – Backup TO URL WITH NOINIT fails because file already exists

There’s a similar question here, but I cannot work out the solution because the circumstances are slightly different, so please bear with me; I’ve been wrestling with the official help docs all day.

My goal is to have one backup file per database per day, with a 15-minute recovery objective. I am scheduling Agent jobs as follows:

  1. Once a day at 00:00, create a full backup of each database specified
  2. Every 15 minutes, create a transaction log backup of each database specified

Ignoring all the code to iterate through the required databases, this is the core logic of part 1 above:

    DECLARE @dbName nvarchar(50) = (SELECT DatabaseName FROM #DbsToBackup WHERE (Id = @current));
    DECLARE @dbNameAndDate nvarchar(200) = @dateString + '_' + @dbName;
    DECLARE @dbSpecificContainerUrl  nvarchar(MAX) = @containerUrl + @dbNameAndDate + '.bak';
    DECLARE @verifyError nvarchar(200) = N'Verify failed. Backup information for database ' + @dbName + ' not found.';

    BACKUP DATABASE @dbName 
         TO URL = @dbSpecificContainerUrl 
         WITH CREDENTIAL = @containerCredential
        ,NOINIT
        ,NAME = @dbNameAndDate
        ,COMPRESSION
        ,CHECKSUM
        ;

Then, the transaction log backup logic is as follows (with the same steps for grabbing the database name, date strings, etc.):

    BACKUP LOG @dbName 
         TO URL = @dbSpecificContainerUrl 
         WITH CREDENTIAL = @containerCredential
        ,NOINIT
        ,NAME = @dbNameAndDate
        ,SKIP
        ,COMPRESSION
        ,CHECKSUM
    ;

Part 1 works on the first attempt, but part 2 (and any subsequent re-run of part 1) immediately fails with this:

A nonrecoverable I/O error occurred on file “https://…..Test.bak:”
The file https://….Test.bak exists on the remote endpoint, and WITH
FORMAT was not specified. Backup cannot proceed.. (SQLSTATE 42000)
(Error 3271) BACKUP LOG is terminating abnormally. The step failed.

I get that the .bak file is already there, but I thought that NOINIT forced an append which would add the transaction log data into the existing .bak file.
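
That is at least how I have seen it behave with disk targets; this is the pattern I assumed would carry over to URL (the database name and path are placeholders):

    -- with a disk target, NOINIT appends each new backup set to the existing media set
    BACKUP DATABASE MyDb
         TO DISK = N'X:\Backups\MyDb.bak'
         WITH NOINIT, NAME = N'MyDb-full', COMPRESSION, CHECKSUM;

    BACKUP LOG MyDb
         TO DISK = N'X:\Backups\MyDb.bak'
         WITH NOINIT, NAME = N'MyDb-log', COMPRESSION, CHECKSUM;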

I’m probably missing something simple but can somebody please advise?

Time Machine fails since enabling FileVault on iMac with Fusion Drive

I have an iMac with a Fusion Drive, and Time Machine configured to back up to an external USB disk.

Everything used to work well, but then I enabled FileVault for the internal Fusion Drive. Since then, Time Machine has not been able to back up:

[screenshot: Time Machine backup error]

Clicking the red info button shows:

[screenshot: Time Machine error details]

I left it for multiple days, rebooted the computer, reset NVRAM, did First Aid using Disk Utility for both the external disk and the filesystem in it, all to no avail. This is on macOS Big Sur 11.3.1 (20E241).

diskutil cs list shows that the encryption is complete.

kartick@iMac ~ % diskutil cs list
CoreStorage logical volume groups (1 found)
|
+-- Logical Volume Group B2E1C2E9-637C-4BAB-897B-ABA771347B48
    =========================================================
    Name:         Time Machine Backup
    Status:       Online
    Size:         5000637104128 B (5.0 TB)
    Free Space:   0 B (0 B)
    |
    +-< Physical Volume 91CE72E6-ACB1-4B90-8DA2-FFE4365D4A4F
    |   ----------------------------------------------------
    |   Index:    0
    |   Disk:     disk3s2
    |   Status:   Online
    |   Size:     5000637104128 B (5.0 TB)
    |
    +-> Logical Volume Family 957C53B4-DB6F-4E1F-B743-286CAA1E75A6
        ----------------------------------------------------------
        Encryption Type:         AES-XTS
        Encryption Status:       Unlocked
        Conversion Status:       Complete
        High Level Queries:      Fully Secure
        |                        Passphrase Required
        |                        Accepts New Users
        |                        Has Visible Users
        |                        Has Volume Key
        |
        +-> Logical Volume 0A5A21A4-D46E-4FE8-9663-3AA7EE5EA68E
            ---------------------------------------------------
            Disk:                  disk4
            Status:                Online
            Size (Total):          5000284778496 B (5.0 TB)
            Revertible:            No
            LV Name:               Time Machine Backup
            Volume Name:           Time Machine Backup
            Content Hint:          Apple_HFS
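
If it helps with further diagnosis, I assume the backupd log entries from around a failed attempt can be pulled with something like this (untested sketch):

# Time Machine (backupd) log entries from the last couple of hours
log show --last 2h --info --predicate 'subsystem == "com.apple.TimeMachine"'

# current Time Machine state
tmutil status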

How do I get it to work?