docker – How should I authorize calls to an API behind an Apache reverse proxy behind CAS authentication?

Apologies if this is incoherent. I’m very new.

I have an Apache server protected by CAS in a Docker container. I’m using mod_auth_cas to do this. I have an API running on a different container which is accessed through a reverse proxy using ProxyPass so that the user must be authorized to make API calls. I now want to know the UID in my API so that I can make sure that the user has permissions.

I’m hoping that there’s a way to add an additional parameter with the verified UID to incoming API calls. I feel like there should be some way to do this with mod_rewrite, but I’m not sure how. I suppose I’d have to get the UID as a string.
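One way to do this (a sketch, not the only option): after mod_auth_cas validates the session it sets REMOTE_USER, and with mod_headers enabled that value can be forwarded to the proxied API as a request header. The header name X-Remote-User and the backend address are illustrative assumptions:

```apache
# Assumes mod_headers, mod_proxy and mod_auth_cas are enabled.
<Location "/api">
    AuthType CAS
    Require valid-user
    # REMOTE_USER is populated by mod_auth_cas after validation;
    # forward it to the backend as a header of our choosing.
    RequestHeader set X-Remote-User "expr=%{REMOTE_USER}"
    ProxyPass "http://api:8080/"
    ProxyPassReverse "http://api:8080/"
</Location>
```

The API can then read the UID from the X-Remote-User header. Since the backend trusts this header blindly, make sure the API container is reachable only through the proxy, never directly.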


apparmor – How to customize a Docker container profile to implement fine-grained network access control


apparmor policy reference profile

#include <tunables/global>

profile docker-test flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  deny /data/** rwl,
  deny /usr/bin/top mrwklx,
  deny /usr/bin/hello mrwklx,

  deny network,
  deny network inet tcp,
  deny network bind inet tcp src dst,
}

error:

syntax error, unexpected TOK_ID, expecting TOK_END_OF_RULE

The error comes from the last line, which contains specific IP addresses. I tested it on Ubuntu 18.04; my kernel version is 5.4.0-42-generic and the AppArmor version is 3.0.1, which I compiled from source.
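For context, the network rule grammar in AppArmor 3.x only accepts an optional domain, type, and protocol after the `network` keyword; neither access keywords like `bind` nor src/dst address conditionals are part of the accepted syntax, which is why the parser stops with TOK_ID regardless of which addresses are used. Rules of this shape do parse (a sketch of the accepted forms, not the fine-grained policy the question is after):

```
  # Accepted by apparmor_parser 3.x: network [domain] [type | protocol]
  deny network inet tcp,
  deny network inet udp,
  deny network netlink raw,
```

Per-address and per-port mediation is not expressible in this grammar, so the fine-grained control would have to come from another layer (e.g. firewall rules) on this AppArmor version.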

apache http server – Docker: httpd starts before volume is mounted?

I have a simple Docker image with apache2 installed (and a2enmod cgid enabled), whose CMD is:

apachectl -D FOREGROUND
I run container with:

docker run --name app1 -p 8080:80 -v "C:\store\app1\www":/app1/www -d app1:1.3

The problem I have is… if the container is stopped, then restarted (via the GUI docker desktop), apache goes into a state where it returns 503 for everything until it is restarted with apachectl restart.

I have no idea why, but I suspect that it is related to the volume not finishing mounting properly before the CMD is executed?

Is there something basic I am not understanding about when -v would complete compared to when CMD is run? The apache2 log file just says this for every request, even after volume appears to be mounted ok:

[cgid:error] No such file or directory: [client …] AH02833: ScriptSock /var/run/apache2/cgisock.9 does not exist: /bcon/www/index.cgi

I have resorted to using this CMD, but I feel like I am misunderstanding something important about Docker:

sleep 5 && apachectl -D FOREGROUND
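One alternative to a fixed sleep is a tiny entrypoint that recreates Apache's runtime directory (where the cgid socket from the error above lives) before handing off to httpd. A sketch, assuming a Debian-based image with the default paths; the script name is my own:

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): /var/run is commonly a tmpfs, so
# /var/run/apache2 can be missing after a container restart. Recreate it,
# then replace this shell with httpd running in the foreground.
mkdir -p /var/run/apache2
exec apachectl -D FOREGROUND
```

This makes the container independent of how quickly the volume mount settles, since the directory Apache needs is guaranteed to exist at startup rather than after an arbitrary delay.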

PHP and Nginx on Docker: curl gets Connection refused in the PHP container

I am working in a local environment with docker.
I have an nginx web container and a php container which are in the same network.

I build the PHP container from my own Dockerfile (with php-fpm and php-cli); the nginx container is composed in a docker-compose file from the nginx:stable hub image.

I have two projects running in it: a Symfony app (http://i-r4y.kaiza.lh/) and a Drupal app (http://i-z4r4.kaiza.lh/). The Symfony app exposes an API which has to be consumed by the Drupal app. The problem is that I get an error when I call the Symfony API from Drupal: cURL error 7: Failed to connect to i-r4y.kaiza.lh port 80: Connection refused

I thought it was a configuration issue on the Symfony API side, e.g. the route must be public, or must accept CORS, etc.

But in the php container, when I curl either the Symfony or the Drupal URL, I get the same error.

app@kz-php74:/var/www$ curl http://i-r4y.kaiza.lh
curl: (7) Failed to connect to i-r4y.kaiza.lh port 80: Connection refused
app@kz-php74:/var/www$ curl http://i-z4r4.kaiza.lh
curl: (7) Failed to connect to i-z4r4.kaiza.lh port 80: Connection refused

I checked in the php container that the hosts are present in /etc/hosts

app@kz-php74:/var/www$ cat /etc/hosts | grep i-   i-r4y.kaiza.lh   i-z4r4.kaiza.lh

Here is the docker-compose.yml:

version: '2.4'

services:
  php:
    build:
      context: ../../../dockerfile
      dockerfile: Dockerfile.php
      args:
        PHP_VERSION: 7.4
    container_name: "kz-php74"
    hostname: "kz-php74"
    user: 1000:1000
    working_dir: /var/www
    volumes:
      - "${LOCAL_PATH}/../www:/var/www"
    extra_hosts:
      - "i-r4y.kaiza.lh:"
      - "i-z4r4.kaiza.lh:"
    networks:
      - kz_local

  mariadb:
    container_name: kz-mysql
    image: mariadb:10.4.0
    volumes:
      - ${LOCAL_PATH}/.data/mariadb:/var/lib/mysql
      - ${LOCAL_PATH}/config/mariadb/conf.d/custom.cnf:/etc/mysql/conf.d/custom.cnf
      - ${LOCAL_PATH}/../www:/var/www
    ports:
      - ${MYSQL_PORT:-3306}:3306
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      - kz_local

  nginx:
    image: nginx:stable
    container_name: kz-web
    volumes:
      - ${LOCAL_PATH}/config/nginx/conf.d:/etc/nginx/conf.d
      - ${LOCAL_PATH}/../www:/var/www
    ports:
      - 80:80
    networks:
      - kz_local

networks:
  kz_local:
    external: true

The nginx config of drupal:

server {
    listen 80;
    listen [::]:80;
    server_name i-z4r4.kaiza.lh;

    root /var/www/i-z4r4/web;

    resolver ipv6=off;

    location @rewrite {
        rewrite ^/(.*)$ /index.php?q=$1;
    }

    # In Drupal 8, we must also match new paths where the '.php' appears in
    # the middle, such as update.php/selection. The rule we use is strict,
    # and only allows this pattern with the update.php front controller.
    # This allows legacy path aliases in the form of
    # blog/index.php/legacy-path to continue to route to Drupal nodes. If
    # you do not have any paths like that, then you might prefer to use a
    # laxer rule, such as:
    #   location ~ \.php(/|$) {
    # The laxer rule will continue to work if Drupal uses this new URL
    # pattern with front controllers other than update.php in a future
    # release.
    location ~ '\.php$|^/update\.php' {
        set $fastcgi_pass "kz-php74:9000";

        fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
        # Security note: If you're running a version of PHP older than the
        # latest 5.3, you should have "cgi.fix_pathinfo = 0;" in php.ini.
        # See for details.
        include fastcgi_params;
        # Block httpoxy attacks. See
        fastcgi_param HTTP_PROXY "";
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_intercept_errors on;
        fastcgi_pass $fastcgi_pass;
    }
}

For symfony:

server {
    listen 80;
    listen [::]:80;
    server_name i-r4y.kaiza.lh;

    root /var/www/i-r4y/public;

    resolver ipv6=off;

    location / {
        # try to serve file directly, fallback to index.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        set $fastcgi_pass "kz-php74:9000";

        fastcgi_pass $fastcgi_pass;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }
}
Does anyone have any idea why this is not working?


apache – Docker Image not having correct permissions for www-data in WordPress

So I have a Docker container running Apache/WordPress, which runs Apache as www-data. I built the image using a Dockerfile and started it. I then copied my local uploads and plugins folders using:

docker cp ../wp-content/uploads esaanz_dev:/var/www/html/wp-content

When I access the machine and try to install a plugin, it always gives me “Installation failed: Could not create directory.” Furthermore, my backup plugin cannot write to wp-content/updraft to create backups.

I have logged into the machine using docker exec and I have run this command:

chown -R www-data:www-data wp-content

The problem persists even though www-data has access to everything:

Permissions for www-data

I am going mad trying to figure out why WordPress is not able to write if it’s running as www-data and this user/group can write to anything in the container.

Any pointers are greatly appreciated.

linux – Docker host networking mode: how to expose ports only to other containers

I’m in a situation where I have to use host networking mode for a container because it has to expose a large number of ports (in the thousands) for streaming connections to clients. Using the normal networking mode would make container initialization very slow because of Docker’s proxy/NAT.

However I now face a different issue. There’s also one port on this container that is used to communicate with a different container in the same machine, so ideally I should not expose this port to all interfaces.

The problem is that if I listen on this port on the loopback interface only, the other container can’t communicate with it when I try to use the special host.docker.internal hostname. It works when I listen on all interfaces, but then the port can be accessed from the outside.

Is there a way to use the host networking mode and open a port only accessible to other containers?

Script with variables that runs Docker

I am a vocational IT student.

I would like to know whether it is feasible to create a script (with environment variables) that is capable of creating a customized Docker (WordPress) container.

Let me explain:

Have a script with data such as a name, database, user, and password, and have it create a Docker container with that data.

What do you think?
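Yes, this is feasible: the official wordpress image on Docker Hub is configured entirely through WORDPRESS_DB_* environment variables, so a small script can assemble a docker run command from them. A sketch, where the script variable names and defaults are my own assumptions; it echoes the command (a dry run) instead of executing it:

```shell
#!/bin/sh
# Hypothetical sketch: build a `docker run` command for a WordPress
# container from environment variables, with fallback defaults.
NAME="${NAME:-my-wordpress}"
DB_HOST="${DB_HOST:-mariadb}"
DB_USER="${DB_USER:-wp}"
DB_PASS="${DB_PASS:-secret}"
DB_NAME="${DB_NAME:-wordpress}"

# Compose the command as a string and echo it first (dry run),
# so it can be reviewed before actually running it.
CMD="docker run -d --name $NAME \
  -e WORDPRESS_DB_HOST=$DB_HOST \
  -e WORDPRESS_DB_USER=$DB_USER \
  -e WORDPRESS_DB_PASSWORD=$DB_PASS \
  -e WORDPRESS_DB_NAME=$DB_NAME \
  -p 8080:80 wordpress:latest"
echo "$CMD"
```

Invoke it as e.g. `NAME=blog DB_PASS=s3cret ./run-wp.sh`; once the echoed command looks right, execute it with `sh -c "$CMD"` or replace the echo with the command itself.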


containers – GCSFuse, Docker and Apple Silicon

I just got a Mac Mini with the new M1 chip to use as a dev machine. My app uses gcsfuse.

When I attempt to install gcsfuse within the Debian stretch based container using “apt-get install gcsfuse-stretch”, I get “Unable to locate package gcsfuse-stretch”.

This is the same workflow I use to install gcsfuse on the same Debian stretch based container on my older Mac laptop.

The only difference I can see is that the ‘arch’ command inside the container on the older laptop returns ‘x86_64’, while ‘arch’ returns ‘aarch64’ on the new Mac Mini.

My question: Is it possible to install and run gcsfuse on a container hosted on Apple silicon? Or do I need to wait for a new release of gcsfuse that supports this?

docker – Messaging specific microservice

We have a system where we will be scaling a Docker container programmatically using the Docker API and assigning each instance a unique name, e.g. inst0001..inst9999. We could have thousands of these instances.

Another manager container would do the scaling and keep track of all instances. What we would like to know is how we could communicate with a specific instance. We need asynchronous communication. Should we use a message broker? We wouldn’t really want a queue for each instance.