ssl – Nginx load balancing an HTTPS cluster

I want to use Nginx as a load balancer for a Consul cluster. The Consul cluster is reachable only over TLS.

Here I’ve tried to reverse proxy a single Consul server to check whether the TLS certificates are working:

server {
    listen 80;
    listen [::]:80;
    
    location  /consul/ {

        resolver 127.0.0.1;

        proxy_pass https://core-consul-server-1-dev.company.io:8500;

        sub_filter_types text/css application/javascript;
        sub_filter_once off;
        sub_filter /v1/ /consul_v1/;

        proxy_ssl_certificate      /etc/nginx/certs/agent.crt;
        proxy_ssl_certificate_key  /etc/nginx/certs/agent.key;
        proxy_ssl_trusted_certificate  /etc/nginx/certs/ca.crt;
        proxy_ssl_verify        on;
        proxy_ssl_verify_depth  4;      

    }
}

This configuration works fine and I can call it with

curl http://core-proxy-server-1-dev.company.io/consul/consul_v1/agent/members

Now I’ve tried to do an upstream like this:

upstream consul {
    server core-consul-server-1-dev.company.io:8500;
    server core-consul-server-2-dev.company.io:8500;
}

server {

    listen 80;
    listen [::]:80;
  
    
    location  /consul/ {

        resolver 127.0.0.1;

        proxy_pass https://consul;
        sub_filter_types text/css application/javascript;
        sub_filter_once off;
        sub_filter /v1/ /consul_v1/;
        
        proxy_ssl_certificate      /etc/nginx/certs/agent.crt;
        proxy_ssl_certificate_key  /etc/nginx/certs/agent.key;
        proxy_ssl_trusted_certificate  /etc/nginx/certs/ca.crt;
        proxy_ssl_verify        on;
        proxy_ssl_verify_depth  4;      

    } 
}

When calling the same curl command as before, I get the following error:

2021/04/20 08:38:59 [debug] 3364#3364: *1 X509_check_host(): no match
2021/04/20 08:38:59 [error] 3364#3364: *1 upstream SSL certificate does not match "consul" while SSL handshaking to upstream, client: 10.10.xx.xxx, server: , request: "GET /consul/consul_v1/agent/members HTTP/1.1", upstream: "https://10.10.yy.yyy:8500/consul/consul_v1/agent/members", host: "core-proxy-server-1-dev.company.io"
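For reference: with proxy_ssl_verify on, nginx checks the upstream certificate against the name in proxy_pass, which here is the upstream group name consul. A minimal sketch of a common fix, assuming all Consul server certificates share a SAN such as server.dc1.consul (that name is an assumption – substitute whatever your certificates actually contain):

```nginx
location /consul/ {
    proxy_pass https://consul;

    # Verify against an explicit name instead of the upstream block name.
    # "server.dc1.consul" is a placeholder; it must appear as a SAN in
    # every Consul server certificate for verification to pass.
    proxy_ssl_name                 server.dc1.consul;
    proxy_ssl_server_name          on;  # also send it as SNI
    proxy_ssl_verify               on;
    proxy_ssl_trusted_certificate  /etc/nginx/certs/ca.crt;
}
```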

Then I tried it like this:

upstream consul_1 {
    server core-consul-server-1-dev.company.io:8500;
}

upstream consul_2 {
    server core-consul-server-2-dev.company.io:8500;
}

map $http_host $backend {
    core-consul-server-1-dev.company.io       consul_1;
    core-consul-server-2-dev.company.io       consul_2;

}

server {

    listen 80;
    listen [::]:80;
  
    
    location  /consul/ {

        resolver 127.0.0.1;

        proxy_pass https://$backend;
        sub_filter_types text/css application/javascript;
        sub_filter_once off;
        sub_filter /v1/ /consul_v1/;
        
        proxy_ssl_certificate      /etc/nginx/certs/agent.crt;
        proxy_ssl_certificate_key  /etc/nginx/certs/agent.key;
        proxy_ssl_trusted_certificate  /etc/nginx/certs/ca.crt;
        proxy_ssl_verify        on;
        proxy_ssl_verify_depth  4;      

    }

}

but again no luck:

2021/04/20 08:45:05 [error] 3588#3588: *1 invalid URL prefix in "https://", client: 10.10.xx.xxx, server: , request: "GET /consul/consul_v1/agent/members HTTP/1.1", host: "core-proxy-server-1-dev.company.io"
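For reference, the 'invalid URL prefix in "https://"' error usually means $backend expanded to an empty string: the client sends Host: core-proxy-server-1-dev.company.io, which matches neither map entry, so proxy_pass is left with a bare scheme. A hedged sketch of the map with a default added:

```nginx
map $http_host $backend {
    # The incoming Host is the proxy's own name, so a default is
    # required; sending everything to consul_1 here is an assumption,
    # not real load balancing:
    default                               consul_1;
    core-consul-server-1-dev.company.io   consul_1;
    core-consul-server-2-dev.company.io   consul_2;
}
```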

Any ideas? Can someone please help me?

Nginx + Gunicorn + Flask not serving static files

I am new to Nginx and Gunicorn.

I am trying to serve a Flask app under a certain prefix,

e.g. https://myweb.com/flask/prefix/

Everything works fine except that it is not loading static files.

my nginx site configuration looks like below

location /flask/prefix/ {
        include proxy_params;
        proxy_pass http://unix:/home/user/flask_config/flask_socket_file.sock:/;        
    }

When I checked the network section using the Firefox developer tools, I found that it is loading static files from the home page path / instead of from /flask/prefix.

Example:

/static/image.png (i.e. https://myweb.com/static/image.png)

but it is supposed to be /flask/prefix/static/image.png (i.e. https://myweb.com/flask/prefix/static/image.png).

However, when I tried removing the :/ at the end of the proxy_pass statement, it ended with a 501 error.

Please let me know what I am doing wrong.

I followed the steps to configure a Flask app with Nginx from here.
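For context: with this setup, Flask has no idea it is mounted under /flask/prefix/, so url_for('static', ...) emits /static/... . One commonly suggested sketch is to pass the prefix along and have the application honor it (the application-side change – e.g. Werkzeug's ProxyFix with x_prefix=1 – is assumed and not shown here):

```nginx
location /flask/prefix/ {
        include proxy_params;
        proxy_pass http://unix:/home/user/flask_config/flask_socket_file.sock:/;

        # Tell the app which prefix it is mounted under, so that it can
        # generate /flask/prefix/static/... URLs. The app must be
        # configured to read this header; nginx alone does not rewrite
        # the generated HTML.
        proxy_set_header X-Forwarded-Prefix /flask/prefix;
    }
```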

ssl – TLSv1 on Nginx running on CentOS 7

Toss me an idea or two here if you can – this one is bumming me out. I have a piece of hardware that can only communicate over TLSv1. I already have the app listening on HTTPS, but only with TLSv1.3; I can't get TLSv1 to work and have lost two days on this. Here are my configs, so if you notice something, please write back. I saw that someone else here had a similar problem, but that thread didn't help in my case. There are two configs I find important here (correct me if I'm wrong): the nginx virtual host config and the Let's Encrypt config. Thanks in advance to anyone who gives it a try.

website config:

#upstream php-upstream {
#    server 127.0.0.1:9000; # NGINX Unit backend address for index.php with
#}

server {

    listen 443 ssl default_server; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/xxxxxxx/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/xxxxxxx/privkey.pem; # managed by Certbot


    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    # seclevel for TLS 1.0 and 1.1
    #ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:@SECLEVEL=1";

    #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #ssl_prefer_server_ciphers on;
    #ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!SEED:!DSS:!CAMELLIA;


    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    server_name xx.xx.com;
    root /var/www/dashboard-backend/public;
    index index.php index.html index.htm;

    location / {
         try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fixes timeouts problems
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

# location ~ /\.ht {
    #     deny all;
    # }

    # location /.well-known/acme-challenge/ {
    #     root /var/www/letsencrypt/;
    #     log_not_found off;
    # }

     error_log /backup/nginx/laravel/laravel.error.log;
     access_log /backup/nginx/laravel/laravel.access.log;

     ## Added fot testing purposes
    fastcgi_pass_header Set-Cookie;
    fastcgi_pass_header Cookie;
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_param  PATH_INFO $fastcgi_path_info;
    fastcgi_param  PATH_TRANSLATED    $document_root$fastcgi_path_info;
    fastcgi_intercept_errors on;
    include fastcgi_params;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}

server {
    if ($host = xx.xx.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot



    listen 80;
    listen [::]:80;
    server_name xx.xx.com;
    return 404; # managed by Certbot


}

and here is the letsencrypt config:

# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.

ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
#ssl_session_tickets off;

ssl_protocols  TLSv1 TLSv1.1;
ssl_prefer_server_ciphers off;

ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DH$
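For context, two things commonly bite here. First, nginx rejects duplicate directives in the same context, so ssl_protocols should live either in the server block or in the included options-ssl-nginx.conf, not both. Second, distributions shipping OpenSSL 1.1.1 often refuse TLS 1.0/1.1 handshakes at the default security level, regardless of ssl_protocols. A hedged sketch of the combination that is usually suggested (treat the cipher string as an illustration, not a recommendation):

```nginx
# Keep exactly one ssl_protocols directive per context:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

# @SECLEVEL=1 lowers the OpenSSL security level so that TLS 1.0/1.1
# handshakes are allowed again on OpenSSL 1.1.1+ builds:
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:@SECLEVEL=1";
```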

Nginx proxy 302 redirect location header is wrong

I have a problem getting an Nginx proxy to work with a Mautic instance. The website works like this (I have swapped all URLs to https://mautic.example):

When you visit https://mautic.example, you get redirected to /s/login. After logging in, the browser sends a POST to /s/login_check, gets a 302 Found with a Location header, then sends a GET to the Location header URL.
On the original server the Location header is https://mautic.example/s/dashboard. On the proxy it’s /s/login, which does nothing but refresh the page, because the user is already on /s/login.

I want simplest proxy possible.

Here is configuration I tried:

server {
    listen 8091;

    location / {
       proxy_pass https://mautic.example;
   
       proxy_http_version 1.1;
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "Upgrade";
     }

}

What am I missing here?
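A few directives worth trying, sketched below. The assumptions: the backend issues absolute Location headers pointing at itself (which proxy_redirect can rewrite), and its session cookie carries the secure flag, which a browser drops when the proxy is reached over plain HTTP on port 8091 – that cookie loss alone would bounce the login back to /s/login:

```nginx
server {
    listen 8091;

    location / {
        proxy_pass https://mautic.example;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";

        # Rewrite absolute redirects that still point at the backend:
        proxy_redirect https://mautic.example/ /;

        # If the session cookie is flagged "secure", it never survives a
        # plain-HTTP proxy; requires nginx 1.19.3+ (otherwise serve the
        # proxy itself over HTTPS instead):
        #proxy_cookie_flags ~ nosecure;
    }
}
```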

ubuntu – cookie is lost on refresh using nginx as reverse proxy. I like the cookie and would like to keep it set in the browser

I’m new to Nginx and Ubuntu – I have been on Windows Server for over a decade and this is my first try with Ubuntu and Nginx, so feel free to correct any wrong assumptions I write here 🙂

My setup: I have an Express.js (Node) app running as an upstream server. A front-end app – built in Svelte – accesses the Express/Node app through the Nginx reverse proxy. Both ends use Let’s Encrypt, and CORS is set up as you will see shortly.

When I run front and back apps on localhost, I’m able to login, set two cookies to the browser and all endpoints perform as expected.

When I deployed the apps I ran into a weird issue: the cookies are lost once I refresh the login page. I added a few flags to my server block, but no go.

I’m sure there is a way – I usually find one – but this issue is really beyond my limited knowledge of Nginx and reverse proxy setups. I hope someone with enough knowledge can point me in the right direction or explain how to fix it.

Here is the issue:
My front end is available at travelmoodonline.com. Click on login. Username: mongo@mongo.com, password: 123.
Inspect the network tab in dev tools. Headers and response are all set correctly. Check the cookies tab under network once you log in and you will see two cookies, one accesstoken and one refreshtoken.

Refresh the page – poof, the tokens are gone. I no longer know anything about the user. Stateless.

On localhost, when I refresh, the cookies are still there once set. With Nginx as a proxy, I’m not sure what happens.

So my question is: how do I fix this so the cookies are set and sent with every request? Why do the cookies disappear? Are they still in memory somewhere? Is the path wrong? Or are the cookies deleted once I leave the page, so that if I redirect the user to another page after login, the cookies don't show in dev tools?

My code:
Node/Express server route code to log in a user:

app.post('/login', (req, res) => {
    // get form data and create cookies
    res.cookie("accesstoken", accessToken, { sameSite: 'none', secure: true });
    res.cookie("refreshtoken", refreshtoken, { sameSite: 'none', secure: true }).json({
        "loginStatus": true, "loginMessage": "vavoom: " + doc._id });
});

Front end – Svelte – fetch route with a form to collect username and password and submit them to the server:

function loginform(event) {
    username = event.target.username.value;
    passwordvalue = event.target.password.value;

    console.log("event username: ", username);
    console.log("event password: ", passwordvalue);

    async function asyncit() {
        let response = await fetch('https://www.foodmoodonline.com/login', {
            method: 'POST',
            origin: 'https://www.travelmoodonline.com',
            credentials: 'include',
            headers: {
                'Accept': 'application/json',
                'Content-type': 'application/json'
            },
            body: JSON.stringify({
                // username and password
            })
        }); // fetch
    }
    asyncit();
}

Now my Nginx server blocks:

# Default server configuration
#
server {
    
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/defaultdir;
    index index.html index.htm index.nginx-debian.html;

    server_name _; 
    location / {
        try_files $uri $uri/ /index.html;
    }

   }



#  port 80 with www

server {
    listen 80;
    listen [::]:80;


    server_name www.travelmoodonline.com;

    root /var/www/travelmoodonline.com;

    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    return 308 https://www.travelmoodonline.com$request_uri; 

}

#  port 80 without www
server {
    listen 80;
    listen [::]:80;

    server_name travelmoodonline.com;

    root /var/www/travelmoodonline.com;
 
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    return 308 https://www.travelmoodonline.com$request_uri;
}



# HTTPS server (with www) port 443 with www

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    server_name www.travelmoodonline.com;    
    root /var/www/travelmoodonline.com;
    index index.html;    
    
    
    
    ssl_certificate /etc/letsencrypt/live/travelmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/travelmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        try_files $uri $uri/ /index.html;       
    }
    

}


# HTTPS server (without www) 
server {
    listen 443 ssl;
    listen [::]:443 ssl;
     add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    server_name travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;
   

    location / {
        try_files $uri $uri/ /index.html;       
    }
    
    ssl_certificate /etc/letsencrypt/live/travelmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/travelmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    
   }






server {

    server_name foodmoodonline.com www.foodmoodonline.com;

#   localhost settings
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

    
    #    proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict";
    #   proxy_pass_header  localhost;

    #    proxy_pass_header Set-Cookie;
    #    proxy_cookie_domain localhost $host;
    #   proxy_cookie_path /; 

    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/foodmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/foodmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.foodmoodonline.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = foodmoodonline.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80;
    listen [::]:80;
    server_name foodmoodonline.com www.foodmoodonline.com;
    return 404; # managed by Certbot

}

I tried 301, 302, 307 and 308 after reading that some of them cover GET and not POST, but it didn’t change the behavior described above. Why doesn’t the cookie stay set in the browser once it shows in dev tools? Should I use rewrite instead of redirect? I’m lost.

I'm not sure whether it's an nginx reverse proxy setting I'm not aware of, a server block setting, or the SSL redirect causing the browser to lose the cookies – but once a cookie is set, the browser is supposed to send it with each request. What is going on here?

Thank you for reading.
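For what it's worth, a hedged sketch of proxy headers that cookie handling frequently depends on – whether they apply here depends on what the Express app does with them (that is an assumption, not a diagnosis):

```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_cache_bypass $http_upgrade;

    # Forward the original host and scheme, so that any domain/secure
    # decisions the app makes when setting cookies match what the
    # browser actually sees:
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```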

debug – WordPress response getting truncated (PHP-FPM + NGINX)

We have a website that used to work but now gives a blank page when we try to edit a post. On debugging we found that the response sent back to the browser from the wp-admin/post.php endpoint is getting truncated. Specifically, here is the tail of the response showing the truncation; the tail can be traced to code in edit-form-blocks.php.

Here are details of our setup:

  • we have a Dockerized setup consisting of a container based on wordpress:php7.4-fpm-alpine image that runs PHP-FPM
  • and a NGINX webserver based on nginx:1.17 image that forwards requests to PHP-FPM
  • WordPress version we are using is 5.5.1

We use the default settings that come with the Docker images. We have tried the solutions in the articles below with no luck:

NGINX has access to all the cache folders. We checked, and both the access log /var/log/nginx/access.log and the error log /var/log/nginx/error.log of NGINX are empty. The NGINX caches are also empty. There is no error in the logs of NGINX, WordPress, or PHP-FPM. List of all the logs we have checked:

NGINX:

  • docker logs nginx-container
  • /var/log/nginx/access.log
  • /var/log/nginx/error.log

PHP-FPM:

WordPress:

  • /var/www/html/wp-content/debug.log

The response we get back is 153147 bytes in size.
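One avenue worth checking, sketched below: with PHP-FPM behind nginx, a response that exceeds the in-memory FastCGI buffers spills to a temp file, and if the worker cannot write there the body can be cut short. The sizes are illustrative assumptions, not tuned values:

```nginx
# Inside the location that passes requests to PHP-FPM:
fastcgi_buffer_size 64k;        # first part of the response (headers)
fastcgi_buffers 16 32k;         # in-memory buffers for the body
fastcgi_busy_buffers_size 64k;

# Also verify that the nginx worker user can write to the FastCGI temp
# directory (the path varies by build; check `nginx -V` for the default):
#fastcgi_temp_path /var/cache/nginx/fastcgi_temp;
```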

ubuntu – Nginx Directory Index is Forbidden

I have a Laravel REST API for a mobile app running under Ubuntu/nginx, and everything was working just fine until today: I woke up and users can't access the API. I checked the nginx error log and found the following:

2021/04/18 01:21:52 [error] 2772#2772: *138808 directory index of "/var/www/html/mydomain/public/" is forbidden, client: 9x.1x.1x.5x, server: mydomain.com, request: "GET / HTTP/1.>
2021/04/17 23:16:01 [error] 2772#2772: *138792 directory index of "/var/www/html/mydomain/public/" is forbidden, client: 4x.15x.20x.2x1, server: mydomain.com, request: "GET /?XDEBUG>

This is my nginx config:

server {

    
    root /var/www/html/mydomain/public;

    # Add index.php to the list if you are using PHP
    index index.php;

    server_name mydomain.com www.mydomain.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?$query_string;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
    #
    #   # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
    #   # With php-cgi (or other tcp sockets):
    #   fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot


}

server {
    if ($host = www.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80;
    listen [::]:80;

    server_name mydomain.com www.mydomain.com;
    return 404; # managed by Certbot
}

No one changed anything on the server side and it was working. What is the issue here?

I'd appreciate any help and ideas – this is a live project.
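One hypothesis worth testing: try_files $uri/ matches as soon as the directory exists, and the request is then handed to the index module – if public/index.php has gone missing or become unreadable, that produces exactly this "directory index ... is forbidden" error. A sketch (the cause is a guess; also check the permissions on /var/www/html/mydomain/public/index.php):

```nginx
location / {
    # Dropping "$uri/" means a bare "/" falls straight through to the
    # Laravel front controller instead of attempting a directory index:
    try_files $uri /index.php?$query_string;
}
```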

Add site-specific redirects in nginx on Heroku

I’m using Heroku to host a PHP-based application. I’m onboarding a website, so I need to set up 301 redirects between the old site’s URL structure and the new URL structure.

Reading Heroku’s docs, it says:

Nginx uses a server that responds to all hostnames. The document root has no access limitations.

My current nginx.conf file looks like this:

location / {
    try_files $uri @rewriteapp;
}

location @rewriteapp {
    add_header "X-Frame-Options" "deny";
    add_header "X-XSS-Protection" "1; mode=block";

    rewrite ^(.*)$ /index.php$1 last;
}

How would I go about including 301 redirects specifically for the hostname of the website I’m importing? I’d want those checked first and, if no rules match, carry on with the rules I already have to pass requests to my application’s index.php file.

Can I nest server blocks within a top-level server block to achieve this?
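Server blocks cannot be nested. Since the quoted Heroku behavior means one server responds to all hostnames, per-host rules are usually expressed with a host check inside the existing config; a hedged sketch, where old-site.example.com and the paths are hypothetical placeholders:

```nginx
# Placed before the existing location blocks; a server-level "if" with
# return/rewrite is one of the safe uses of "if". Non-matching hosts
# fall through to the rules that follow:
if ($host = old-site.example.com) {
    rewrite ^/old-page$ /new-page permanent;  # 301
}
```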

tls – Can a machine running a packet sniffer see what nginx is forwarding on localhost to a Flask app?

I want to serve a Flask application from my PC. Only other machines in my network should be able to consume the API. However, I wish to have the communication between the other machines and the API secured using HTTPS with a self-signed certificate. For this reason (because serving Flask with waitress does not support HTTPS on its own), I am using nginx on the same machine as a proxy so that it can handle HTTPS.

My question is:
If someone connects to my network, let’s say via wifi, and runs a packet sniffer like Wireshark, will they be able to see what is being transferred between the legitimate clients of the app and the app?

When running Wireshark on the same machine as the application, I see the request and all of its contents. I believe this is because it is sniffing on localhost and sees the forwarded HTTP request (from nginx to the app). When running Wireshark on my laptop, I don’t see the HTTP request. Can someone confirm this is safe for my purposes?

Also: Can someone confirm that if nginx were to run on a separate local machine, then the http request would be exposed again?

EDIT: Here is the nginx configuration I have

server {
    listen 443 ssl;

    ssl_certificate /etc/nginx/sites-available/nginx-selfsigned.crt;
    ssl_certificate_key /etc/nginx/sites-available/nginx-selfsigned.key;

    server_name example.com;

    location / {

        proxy_pass http://127.0.0.1:5000;
        proxy_set_header X-Real-IP $remote_addr;


    }
}

server {
  listen 80;

  server_name 192.168.1.5;

  return 301 https://$server_name$request_uri;
}

nginx – WordPress multisite wp-admin to many redirects

I have a WordPress blog set up with the multisite feature.

There are two blogs set up on it, with the following URLs:

# Main blog home
https://example.com/blog/

# Main blog admin
https://example.com/blog/wp-admin/

# 2nd blog home
https://example.com/blog/2nd/

# 2nd blog admin
https://example.com/blog/2nd/wp-admin/

Here is the nginx configuration I’m using:

server {
        listen 80;
        server_name example.com www.example.com;
        root /var/www/website/build;

        index index.html index.htm index.php;

        # Serve blog
        location /blog {
                return 301 /blog/;
        }

        location ^~ /blog/ {
                autoindex on;
                root /var/www/;
                index index.php index.html index.htm;

                try_files $uri $uri/ /blog/index.php?$args;

                location ~ \.php$ {
                        include snippets/fastcgi-php.conf;
                        fastcgi_param  SCRIPT_FILENAME    $request_filename;
                        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
                }
        }

        # Serve other files
        location / {
                try_files $uri $uri/ =404;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        }

        location ~ /\.ht {
                deny all;
        }
}

With this setup, every URL above works except the 2nd blog's admin panel.
Accessing https://example.com/blog/2nd/wp-admin/ gives an ERR_TOO_MANY_REDIRECTS error.
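For reference, subdirectory multisite installs normally need two extra rewrites so that the second site's wp-admin/, wp-content/ and wp-includes/ paths map back onto the shared core files; without them /blog/2nd/wp-admin/ never resolves and WordPress keeps redirecting. A sketch of the location block with the standard rules added (the surrounding directives are copied from the config above):

```nginx
location ^~ /blog/ {
        autoindex on;
        root /var/www/;
        index index.php index.html index.htm;

        # Standard subdirectory-multisite rules: strip the site slug so
        # /blog/2nd/wp-admin/... is served from /blog/wp-admin/...
        rewrite ^/blog/([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) /blog/$2 last;
        rewrite ^/blog/([_0-9a-zA-Z-]+/)?(.*\.php)$ /blog/$2 last;

        try_files $uri $uri/ /blog/index.php?$args;

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_param  SCRIPT_FILENAME    $request_filename;
                fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        }
}
```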