Certificates in docker local registry

I’m trying to set up a Docker local registry within my university network. Since they offer certificates from RedIRIS, I requested one, and I now have three different files:

  1. cert.pem
  2. intermediate.pem
  3. chain.pem

In addition to this, I kept my .key and .csr files as well. The example on the Docker website (https://docs.docker.com/registry/deploying/#get-a-certificate) expects a single certificate file:

-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt

I can’t work out how to concatenate/transform those .pem files into the domain.crt file I need; all my attempts left the Docker local registry treating the certificate as self-signed.
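For reference, this is the kind of concatenation I would expect to work, a hedged sketch assuming cert.pem is the leaf (server) certificate and intermediate.pem holds the issuing CA(s); order matters, leaf first:

```shell
# Assumption: cert.pem = leaf certificate, intermediate.pem = issuing CA(s).
# The server certificate must come first, then the intermediates:
cat cert.pem intermediate.pem > domain.crt

# Sanity check: list every certificate in the resulting bundle
openssl crl2pkcs7 -nocrl -certfile domain.crt | openssl pkcs7 -print_certs -noout
```

With some CAs, chain.pem is already the leaf plus intermediates, in which case copying chain.pem to domain.crt may be enough. The private key stays in its own file, referenced by REGISTRY_HTTP_TLS_KEY.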

Thank you very much in advance, and sorry if this question is basic; my knowledge of system administration is minimal.

Docker systemctl Failed to get D-Bus connection bug

When I execute any systemctl command inside a CentOS 7 container, I get the error Failed to get D-Bus connection: Operation not permitted. The container is started with docker container run --privileged -d -t -p 80:80 09fc90b6865e. Yesterday this worked exactly as described, and now it is broken. All commands are executed as root.
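For comparison, the usual way to get systemd (and therefore systemctl) working inside a CentOS 7 container is to boot /usr/sbin/init as PID 1 and expose the host cgroup hierarchy read-only; a hedged sketch using the image ID from the question:

```shell
# Sketch: systemd needs to run as PID 1 and to see /sys/fs/cgroup;
# 09fc90b6865e is the image ID from the question.
docker container run --privileged -d -t -p 80:80 \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  09fc90b6865e /usr/sbin/init
```

Whether this is needed here depends on what the image’s default command is; if it previously ran systemd implicitly, a change to the image or to Docker’s cgroup setup could explain the regression.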

debian – Docker on a host with multiple Ethernet interfaces

We were running a Docker container on a host that had an eth1 and an eth2 interface. I configured source-based policy routing, so everything worked fine for software installed at the host level communicating over either IP. However, I can’t communicate with a Docker container over eth2 (the non-default interface).

I’m using Debian 10 on the host.
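For reference, a minimal sketch of the kind of policy routing in play (addresses and table name are made up); the catch is that, at routing time, replies to container traffic do not carry eth2’s address, so rules keyed on that address never match them:

```shell
# Hypothetical host-level policy routing (example addresses):
ip rule add from 198.51.100.10 table viaeth2        # eth2's own address
ip route add default via 198.51.100.1 dev eth2 table viaeth2
```

Traffic to a container arriving on eth2 is DNATed to the container’s bridge address, so the reply is routed with the container’s source IP and falls through to the main table’s default route instead of going back out eth2.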

docker container start stop events

Is there a docker log of container start and stop events?
(I am not interested in docker logs command, as that will give me the containers stdout log.)

There is of course the “status” field of the docker ps or docker inspect commands, but that only gives me the latest status of the container. I am searching for a more extended record of container start/stop events.
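For what it’s worth, the daemon does expose such an event stream via docker events; a hedged sketch (as far as I know, past events are kept only in a rolling buffer, not persisted indefinitely):

```shell
# Show container start/stop events from the last 24 hours, then keep
# streaming; repeated --filter event= values are OR'ed together.
docker events \
  --filter type=container \
  --filter event=start \
  --filter event=stop \
  --since 24h
```

Adding --until stops the stream after printing the historical window instead of following live events.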

linux – Docker Swarm service not deployed to worker nodes

I’m doing some fiddling on Docker Swarm to create a load balance for a Minecraft server.

I created a service that uses a bind mount for the data, but when I create the service it is only available on the manager node, not the worker node.

I tried re-joining the worker node to the swarm, but that did nothing, and I also removed any images the worker node had downloaded in case they were stale. Nothing I have done helps, but if I run a different service, such as nginx, it does get deployed on the worker nodes.

This is what I am running:

docker service create --name minecraft -p 19132:19132/udp --mount type=bind,src=/opt/minecraft,dst=/opt/minecraft repo/images:minecraft

Any idea why this is not working? I remember doing this a couple of months ago and it worked just fine, but now that I am returning to this experiment, it’s not working.
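A hedged debugging sketch: list the service’s tasks, including failed ones, with untruncated error messages, since a bind mount whose src path does not exist on a node is a common reason tasks never start there:

```shell
# Show all tasks for the service, with full error messages:
docker service ps minecraft --no-trunc

# On the worker node, check the bind-mount source exists:
test -d /opt/minecraft && echo present || echo missing
```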

penetration test – How to start pentesting/reverse engineering/cracking a software on Linux? (Docker based)

TL;DR: What are good learning resources for security-testing software that runs with Docker on Ubuntu?

I am in a junior position at this company, and they figured it would be good if I tested their software from a security perspective. I have already learned a bit about hacking, but it was mainly web servers, CTFs, TryHackMe, and HTB, so nothing connected to RE or cracking. I don’t know how to start: I found a lot of material about RE on Windows, and the CIS Docker Benchmark, but I didn’t find any articles specifically about reverse engineering/cracking software running in Docker on Linux.

The product runs on an Ubuntu 18.04 server, on Docker, installed from a .deb package (don’t know if this helps 🙂).
What I am looking for is guidance on how to learn about cracking software that is installed with Docker on Linux, or which attack vector is usually easier or more valuable to look at: for example, trying to crack the licensing, or trying a buffer overflow; basically, how the “average attacker” thinks. Please tell me if I am missing some basics and it doesn’t really matter whether I crack/pentest on Windows, Docker, or Linux; in that case I will just start with a book or a complete course.
I understand that this is a broader topic than just following a step-by-step tutorial, but I have plenty of time to learn, so videos, books, and articles, anything that purposefully teaches Docker/Linux software testing, would be awesome.
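Not a reverse-engineering resource, but since the CIS Docker Benchmark came up: Docker publishes docker-bench-security, a script that audits a host against that benchmark, which can be a concrete first pass on the Docker layer (repository URL assumed current):

```shell
# Audit the Docker host against the CIS Docker Benchmark
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```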

Also, which of the following do you think could point me in the right direction?

Found some books:
https://kalitut.com/Best-reverse-engineering-books/

This can be related, and it was already helpful:
Is it possible to escalate privileges and escaping from a Docker container?

I also found LiveOverflow’s videos, some of which relate to Docker; should I start the whole series?
https://www.youtube.com/watch?v=cPGZMt4cJ0I&list=PLhixgUqwRTjxglIswKp9mpkfPNfHkzyeN&index=55&ab_channel=LiveOverflow

Thank you very much in advance.

php – Adding xmlwriter via docker file

I’m trying to install a plugin on my Docker-based site that requires XMLWriter; this is the message:

“The official Amazon Web Services SDK requires PHP 5.5+ with SimpleXML and XMLWriter modules, and cURL 7.16.2+ compiled with OpenSSL and zlib. Your server currently has no XMLWriter PHP module.”

I have a docker file where I’m trying to enable it:

FROM xxx/php7-base:latest


ENV APP=www

ADD $APP /var/www/app

ADD config/supervisord.conf /etc/supervisord.conf
ADD config/nginx.conf /etc/nginx/nginx.conf



RUN cd /var/www/app && \
    composer install --no-interaction

EXPOSE 443

ENTRYPOINT ["supervisord", "--nodaemon", "--configuration", "/etc/supervisord.conf"]

I’ve tried changing the run part to this:

RUN cd /var/www/app && \
    composer install --no-interaction \
    docker-php-ext-install xmlwriter

It hits the following errors when I try to build it:

Step 6/8 : RUN cd /var/www/app && composer install --no-interaction     docker-php-ext-install xmlwriter
 ---> Running in c59e2e08a4b9
Invalid argument docker-php-ext-install xmlwriter. Use "composer require docker-php-ext-install xmlwriter" instead to add packages to your composer.json.
ERROR: Service 'totm' failed to build : The command '/bin/sh -c cd /var/www/app && composer install --no-interaction     docker-php-ext-install xmlwriter' returned a non-zero code: 1

Where am I going wrong?
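A hedged reading of the error: the shell never saw docker-php-ext-install as a separate command, because there is no && before it, so composer parsed it as extra arguments. A sketch of the fix, assuming xxx/php7-base derives from the official PHP image (where the docker-php-ext-install helper is available):

```dockerfile
# Chain each step with && so a failure stops the build; install the
# extension before composer resolves dependencies that need it.
RUN docker-php-ext-install xmlwriter && \
    cd /var/www/app && \
    composer install --no-interaction
```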

How to prevent Docker from looking up host names via external DNS?

Say you have the following docker-compose file:

version: '3.5'

services:
  web:
    image: nginx
    expose:
      - 80

  # Imaginary service that requests http://web/
  curl:
    image: curlimages/curl
    command: curl -i http://web/
    

If web is down, accessing http://web/ will trigger an external DNS lookup. In my case, this caused several hundred thousand requests per hour to our DNS server.

How can I prevent Docker from externally looking up host names when a container is down?
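One hedged mitigation sketch (the addresses here are made up): pin web’s address on a user-defined network and map the name via extra_hosts, so the client resolves web from /etc/hosts and the query never reaches the embedded DNS forwarder, even while the container is down:

```yaml
version: '3.5'

services:
  web:
    image: nginx
    expose:
      - 80
    networks:
      app:
        ipv4_address: 172.28.0.10   # example static address

  curl:
    image: curlimages/curl
    command: curl -i http://web/
    extra_hosts:
      - "web:172.28.0.10"           # resolved from /etc/hosts, no DNS query
    networks:
      - app

networks:
  app:
    ipam:
      config:
        - subnet: 172.28.0.0/16
```

The trade-off is losing dynamic addressing: the static IP and the hosts entry must be kept in sync by hand.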

docker – Security implications of granting non-root access to privileged ports (<1024)

Lots of solutions to this problem exist (e.g. here and here), but in order to decide which is best I’d need to know more about the security implications of each solution (or at least of this class of solution in general).

My context: I’m looking into running a rootless Docker/Podman Nginx container (on an Ubuntu Server 20.04 LTS host). Podman suggests a solution in its error message: Error: rootlessport cannot expose privileged port 80, you can add 'net.ipv4.ip_unprivileged_port_start=80' to /etc/sysctl.conf (currently 1024). But from what I’ve read, that doesn’t seem like a great solution, because it grants the ability to bind low ports to all users.
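A narrower alternative worth comparing (a sketch, not a recommendation): publish on an unprivileged port and redirect the privileged one at the host firewall, which avoids loosening ip_unprivileged_port_start for every user. Port numbers are examples:

```shell
# Rootless container binds 8080; the host redirects 80 -> 8080.
podman run -d --name web -p 8080:80 nginx
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
```

Note that the PREROUTING rule only affects traffic arriving on external interfaces; local connections to port 80 would need a similar rule in the OUTPUT chain.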

Force kubernetes to use containerd when docker is installed

Kubelet is the process responsible for the on-Node container actions, and it has a set of command-line flags that tell it to use a remote container-management provider (both containerd and CRI-O are consumed the same way, AFAIK):

(Service)
ExecStart=/usr/local/bin/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/dockershim.sock

(assuming your containerd is listening on the same dockershim.sock path)

The fine manual specifically says to ensure you don’t switch those flags on an existing Node registration, since kubelet makes certain assumptions when creating the containers. So if you already have a Node that is using docker: ideally stop kubelet, blow away those containers, run kubectl delete node $the_node_name, and let kubelet re-register with the correct configuration.
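The sequence above can be sketched as follows; $the_node_name and the socket path are placeholders for your environment:

```shell
# Stop kubelet, remove the docker-managed containers, drop the Node
# registration, then restart kubelet with the new runtime flags.
systemctl stop kubelet
docker ps -aq | xargs -r docker rm -f
kubectl delete node "$the_node_name"
# edit the unit's ExecStart to add --container-runtime=remote and the
# containerd --container-runtime-endpoint, then:
systemctl daemon-reload
systemctl start kubelet
```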