Is it possible to connect remotely to MySQL running in a Cloud Run container?



Details

Server 1 (slave server) - I have a Docker container (in Google Cloud Run) with a MySQL server in it.
   |
   |
   |
Server 2 (main server) - I want to connect to MySQL via PHP.

Is it possible to do this?

P.S.

I tried to do it, but I could not.
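
For reference, this is roughly how I tested reachability from server 2; the hostname below is just a placeholder for my real Cloud Run URL:

# placeholder service URL; checks whether anything answers on the MySQL port at all
$ nc -vz my-mysql-service-abc123-uc.a.run.app 3306

# for comparison, the HTTPS port that Cloud Run serves requests on
$ nc -vz my-mysql-service-abc123-uc.a.run.app 443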

programming languages – key benefit of using containers?

I’m not someone with a computer science background, just curious about container technology, and of course I did some googling to see what a container is.

I hope this is the correct site for container-related questions.

Here are my questions.

  1. What is the primary benefit that container tech offers? Packaging the environment (libraries, configuration, etc.) for code/programs, or offering great portability across different computer systems?

(someone told me that portability isn’t the key benefit, but it is close)

  2. Is it doable and practical to use containers to provide services just like servers can?

  3. Can containers isolate malicious code/apps to prevent them from affecting the underlying operating system?

Docker container with random date

I was running a container based on the image linuxserver/radarr:3.0.0.3095-ls12.
Once I updated the tag/version to linuxserver/radarr:3.0.0.3807-ls24 the application stopped working.

After debugging a little I noticed that date behaves weirdly in this image:

$ docker run --rm --entrypoint "" linuxserver/radarr:3.0.0.3807-ls24 date
Fri 20 Feb 1970 03:17:15 AM UTC
$ docker run --rm --entrypoint "" linuxserver/radarr:3.0.0.3807-ls24 date
Sun 01 Mar 1970 09:09:15 AM UTC
$ docker run --rm --entrypoint "" linuxserver/radarr:3.0.0.3807-ls24 date
Thu 19 Feb 1970 09:04:59 AM UTC

But the old one doesn’t:

$ docker run --rm --entrypoint "" linuxserver/radarr:3.0.0.3095-ls12 date
Sat 10 Oct 2020 12:15:09 AM UTC

After meditating for a while, suspecting some kind of weird dark magic in the clock, I decided to run it with --privileged for full/raw access:

$ docker run --rm --entrypoint "" --privileged linuxserver/radarr:3.0.0.3807-ls24 date
Sat 10 Oct 2020 12:16:22 AM UTC

And it worked well (and so did the app, but that’s not important to the question).

I have gone through the docker history of both images, but there are a lot of COPY and RUN curl steps that might produce different results between builds. Still, I don’t think the image maintainers would want to tamper with the date, so it must be something out of their control (no libfaketime found)…

This is a multi-arch image and these results are from a Raspberry Pi (so the arm build of the image). On my amd64 Linux laptop, the latest image reports the proper date even without --privileged…

What could it be? How can I even start to debug this, given that I cannot use the --privileged flag?
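
One guess I have not been able to confirm is that the host’s seccomp filter (an old libseccomp on the Raspberry Pi) is interfering with newer time-related syscalls, since --privileged also disables the default seccomp profile. A narrower test of that guess would be:

# disable only the default seccomp profile, without the rest of --privileged
$ docker run --rm --entrypoint "" --security-opt seccomp=unconfined linuxserver/radarr:3.0.0.3807-ls24 date

# check which security options the daemon applies by default
$ docker info | grep -i -A2 'security options'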

c++ – Storage container for components of entities (ECS)

Overview
After playing for a while with the ECS implementation of the Unity engine and liking it very much, I decided to try to recreate it as a challenge. As part of this challenge I need a way of storing the components grouped by entity; I solved this by creating a container called a “Chunk”.

Unity uses archetypes to group components together and stores these components in pre-allocated chunks of fixed size.

I made a simple design of my implementation as clarification:

[diagram: an archetype as a linked list of chunks, each chunk holding arrays of Comp1, Comp2 and Comp3]

Here an archetype is a linked list of chunks; the chunks contain arrays of all the components that make up the archetype, in this case Comp1, Comp2 and Comp3. Once a chunk is full, a new chunk is allocated and can be filled up, and so on.

The chunk itself is implemented like this:

[diagram: chunk layout with an array of indices and tightly packed component arrays]

With this solution I can store the components grouped by entity while making optimal use of storage and cache because the components are tightly packed in an array. Because of the indirection provided by the array of indices I am able to delete any component and move the rest of the components down to make sure there aren’t any holes.

Questions
I have some items I would like feedback on in order to improve:

  • Is the code clear and concise?
  • Are there any obvious performance improvements?
  • Because this is my first somewhat deep dive into templates, are there any STL solutions I could have
    used that I have missed?

Code

  • chunk.h
    Contains the container.
#pragma once

#include "utils.h"
#include "entity.h"

#include <cstdint>
#include <tuple>

template<size_t Capacity, typename ...Components>
class chunk
{

public:
    struct index
    {
        uint16_t id;
        uint16_t index;
        uint16_t next;
    };

    chunk()
        :
        m_enqueue(Capacity - 1),
        m_dequeue(0),
        m_object_count(0)
    {
        static_assert((Capacity & (Capacity - 1)) == 0, "number should be power of 2");

        for (uint16_t i = 0; i < Capacity; i++)
        {
            m_indices[i].id = i;
            m_indices[i].next = i + 1;
        }
    }

    const uint16_t add()
    {
        index& index = m_indices[m_dequeue];
        m_dequeue = index.next;
        index.id += m_new_id;
        index.index = m_object_count++;

        return index.id;
    }

    void remove(uint16_t id)
    {
        index& index = m_indices[id & m_index_mask];
        
        tuple_utils<Components...>::tuple_array<Capacity, Components...>::remove_item(index.index, m_object_count, m_items);

        m_indices[id & m_index_mask].index = index.index;

        index.index = USHRT_MAX;
        m_indices[m_enqueue].next = id & m_index_mask;
        m_enqueue = id & m_index_mask;
    }

    template<typename... ComponentParams>
    constexpr void assign(uint16_t id, ComponentParams&... value)
    {
        static_assert(arg_types<Components...>::contain_args<ComponentParams...>::value, "Component type does not exist on entity");

        index& index = m_indices[id & m_index_mask];
        tuple_utils<Components...>::tuple_array<Capacity, ComponentParams...>::assign_item(index.index, m_object_count, m_items, value...);
    }

    template<typename T>
    constexpr T& get_component_data(uint16_t id)
    {
        static_assert(arg_types<Components...>::contain_type<T>::value, "Component type does not exist on entity");

        index& index = m_indices[id & m_index_mask];
        return std::get<T[Capacity]>(m_items)[index.index];
    }

    inline const bool contains(uint16_t id) const
    {
        const index& index = m_indices[id & m_index_mask];
        return index.id == id && index.index != USHRT_MAX;
    }

    inline const uint32_t get_count() const
    {
        return m_object_count;
    }

    static constexpr uint16_t get_capacity() 
    {
        return Capacity;
    }

private:
    static constexpr uint16_t m_index_mask = Capacity - 1;
    static constexpr uint16_t m_new_id = m_index_mask + 1;

    uint16_t m_enqueue;
    uint16_t m_dequeue;
    uint16_t m_object_count;
    index m_indices[Capacity] = {};
    std::tuple<Components[Capacity]...> m_items;
};
  • utils.h
    Contains utility functions for templates used by the chunk class.
// utils.h
#pragma once

#include <tuple>
#include <type_traits>
#include <algorithm>

// get the total size in bytes of an argument pack
template<typename First, typename... Rest>
struct args_size
{
    static constexpr size_t value = args_size<First>::value + args_size<Rest...>::value;
};

template <typename T>
struct args_size<T>
{
    static constexpr size_t value = sizeof(T);
};

template<typename... Args>
struct arg_types
{
    //check if variadic template contains types of Args
    template<typename First, typename... Rest>
    struct contain_args
    {
        static constexpr bool value = std::disjunction<std::is_same<First, Args>...>::value ? 
            std::disjunction<std::is_same<First, Args>...>::value : 
            contain_args<Rest...>::value;
    };

    template <typename Last>
    struct contain_args<Last> 
    {
        static constexpr bool value = std::disjunction<std::is_same<Last, Args>...>::value;
    };

    //check if variadic template contains type of T
    template <typename T>
    struct contain_type : std::disjunction<std::is_same<T, Args>...> {};
};

template<typename... Args>
struct tuple_utils
{
    // general operations on arrays inside tuple
    template<size_t Size, typename First, typename... Rest>
    struct tuple_array
    {
        static constexpr void remove_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple)
        {
            First& item = std::get<First[Size]>(p_tuple)[index];
            item = std::get<First[Size]>(p_tuple)[--count];
            tuple_array<Size, Rest...>::remove_item(index, count, p_tuple);
        }

        static constexpr void assign_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple, const First& first, const Rest&... rest)
        {
            std::get<First[Size]>(p_tuple)[index] = first;
            tuple_array<Size, Rest...>::assign_item(index, count, p_tuple, rest...);
        }
    };

    template <size_t Size, typename Last>
    struct tuple_array<Size, Last>
    {
        static constexpr void remove_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple)
        {
            Last& item = std::get<Last[Size]>(p_tuple)[index];
            item = std::get<Last[Size]>(p_tuple)[--count];
        }

        static constexpr void assign_item(size_t index, size_t count, std::tuple<Args[Size]...>& p_tuple, const Last& last)
        {
            std::get<Last[Size]>(p_tuple)[index] = last;
        }
    };
};

Usage

    auto ch = new chunk<2 * 2, TestComponent1, TestComponent2>();
    auto id1 = ch->add();
    auto id2 = ch->add();
    auto contains = ch->contains(id1);

    ch->assign(id1, TestComponent2{ 5 });
    ch->assign(id2, TestComponent1{ 2 });

    ch->remove(id1);

magento2 – Custom module problem in Docker container

I am having an issue getting a custom module created with

https://cedcommerce.com/magento-2-module-creator/

to work properly inside of a Docker container.

I have tried containers from

https://github.com/alexcheng1982/docker-magento2

https://github.com/bitnami/bitnami-docker-magento

Both install and operate fine, and when I include my module files by creating a volume I can install the module just fine. The problem occurs when I add a new entry through the admin panel. I have this same module running locally on an AMPPS server and it works as expected.

When deployed in default mode I get this display:

This is normal when run locally:

Here is the output of the errors produced when in developer mode:

2 exception(s):
Exception #0 (Magento\Framework\Exception\LocalizedException): Invalid block type: TattcomTestTestModuleNameBlockAdminhtmlTestmodelEditForm
Exception #1 (ReflectionException): Class TattcomTestTestModuleNameBlockAdminhtmlTestmodelEditForm does not exist

Any help is appreciated.

Environment variables are different outside and inside a docker container

I am going to be working with some environment variables in Ubuntu.
I wrote a trivial script such as:

import os

username = os.environ.get('A_USER_NAME', None)
print(username)

Now, running the command printenv from the terminal, I can see that the variable A_USER_NAME is set to some value, and running the above script prints that value.

However, when running a Docker container and running the script inside that container, nothing gets printed, and of course I can see through printenv that the variable is not defined.

I suppose then that when building this container I should have defined this variable? Perhaps in the Dockerfile?

What is the correct way of doing this?

Right now I am going to set the variable manually inside the container, but I would like to be reminded of how to do this properly.
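
For the record, here is a minimal sketch of what I think the proper options are (the image name my-image, the script name script.py and the file app.env are placeholders):

# pass the variable from the host at run time
$ docker run --rm -e A_USER_NAME="$A_USER_NAME" my-image python3 script.py

# or collect several variables in a file and pass them all at once
$ docker run --rm --env-file app.env my-image python3 script.py

# or bake a default value into the image itself with an ENV line in the Dockerfile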

object oriented – Should I make a class for my container that is basically std::vector<std::variant>?

I need to use std::vector<std::variant<my_types>>, where using my_types = std::variant<Type1, Type2, ... TypeN>, for storing a known set of data types inside the same std::vector. However, printing, accessing and other kinds of operations with std::variant are rather tricky, so I will surely need functions for all of these operations (see the rough sketch at the end of this question).

Questions:

  1. Should I encapsulate std::vector<std::variant<my_types>> inside a template class along with functions for processing it, or would I be better off just writing template functions without creating a class?
  2. If writing a class is better, is it a good approach to make it inherit from std::vector in this particular case? (I’m aware that inheriting from containers is a bad practice.)
  3. If writing a class is better, how can I prevent the compiler from instantiating the class with a std::vector that does not hold std::variant as its element type?
  4. Same as (3), but if the functional programming approach is better.
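
To make the question more concrete, here is a minimal sketch of the wrapper (composition) idea I have in mind; the element types and the print helper are only placeholders:

#include <iostream>
#include <string>
#include <variant>
#include <vector>

using my_types = std::variant<int, double, std::string>; // placeholder type list

class variant_vector // thin wrapper: owns the vector instead of inheriting from it
{
public:
    void push_back(my_types value) { m_data.push_back(std::move(value)); }

    my_types& operator[](std::size_t i) { return m_data[i]; }
    const my_types& operator[](std::size_t i) const { return m_data[i]; }

    // example of hiding the std::visit boilerplate behind a member function
    void print_all(std::ostream& os) const
    {
        for (const auto& v : m_data)
            std::visit([&os](const auto& x) { os << x << '\n'; }, v);
    }

private:
    std::vector<my_types> m_data;
};

int main()
{
    variant_vector vec;
    vec.push_back(42);
    vec.push_back(3.14);
    vec.push_back(std::string{ "hello" });
    vec.print_all(std::cout);
}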

Safety of PostgreSQL run in a Docker container

Postgres is available as a Docker image at dockerhub/postgres. Configuration is easily done via environment variables, but they also state:

Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.

That makes me wonder if there is a safety risk when having a Dockerfile like this:

FROM postgres
COPY *.sql /docker-entrypoint-initdb.d/

and a docker-compose.yml like this:

    database:
        build:
            context: ./database/
            dockerfile: ./Dockerfile
        environment:
            POSTGRES_PASSWORD: test123
        ports:
            - 5432:5432
        volumes:
            - databasevol:/var/lib/postgresql/data
        networks:
            - net

volumes:
    databasevol:
        driver: local

networks:
    net:
        driver: bridge

which will rebuild the database image based on postgres:latest, which may lead to the database being initialised by version 12.4 but later being read by version 13.0. The possibility of the DBMS identifying the data directory as corrupted and purging it horrifies me.

Hence the question: is it safe to run databases in Docker containers?
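
For what it is worth, this is how I currently compare the version that initialised the volume with the server binary in the image (assuming the service name database from the compose file above and the image’s default data directory):

# major version recorded in the persisted data directory
$ docker-compose exec database cat /var/lib/postgresql/data/PG_VERSION

# version of the server binary currently shipped in the image
$ docker-compose exec database postgres --version

Pinning the base image to a major version tag (e.g. FROM postgres:12 instead of the implicit latest) would at least make the upgrade an explicit step, but the general question still stands.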

fedora – Seems like firewalld is not honouring rich rules to give a Docker container an outside connection aside from ping

I’m on Fedora 32 5.7.16-200.fc32.x86_64,
with the package firewalld: firewalld-0.8.3-1.fc32.noarch,
and my Docker containers (all of them, with every image)
don’t have internet access by default, or any outside
connection aside from ping, for that matter.
(for example, I can ping by IP, but not by domain,
because I can’t reach the DNS server with a request)

I knew absolutely nothing about firewalls and firewalld
in particular before this, but I’ve been reading about it,
trying to understand the problem, and find the solution.

Besides the official firewalld documentation, I’ve been
reading about incompatibilities between Docker and firewalld,
and a bunch of other things. I also know that
podman exists as a Docker alternative.

But for me, this is not about ‘make it work, and DONE’;
this is about understanding, as much as I can,
WHY it works when it does, and
WHY it doesn’t work when it doesn’t.

After learning about firewalld, it seems correct to me that,
by default, the situation is like the one described above.

Now, I want to change that: I want to be able to ping
by domain, for example. When being blocked, the logs from
the ping by domain say:

FINAL_REJECT: IN="$CONTAINER_INTERFACE" OUT=wlp3s0 PHYSIN=vethb53e882 MAC=XX:....:XX SRC="$CONTAINER_IP" DST=1.1.1.1 LEN=56 TOS=0x00 PREC=0x00 TTL=63 ID=37255 DF PROTO=UDP SPT=57463 DPT=53 LEN=36

I tried different things; some of them worked, and others didn’t.
But some of them, I think, should work, even though they don’t.

Seems like firewalld is not honouring its rich rules.
As far as I understood, you put a connection in one zone
(and only one) by its interface or source, and then
the zone rules apply to the connection:
if there are no rich rules in the zone,
the target of the zone (ACCEPT, DROP, etc.)
applies to that connection;
if there are rich rules, and one matches the connection,
then that rich rule is applied to that connection.
And the first match ALWAYS wins.

What follows is some output from my terminal, with
different tries; each try is labeled, saying whether or not it worked
to give the docker container the ability to ping by domain.

In my opinion, all of the tries below should do the trick…
some of them are doing things that I don’t think are necessary,
like the tries that change the target of the zone to DROP:
I did that because I thought that maybe the default target was buggy.
So, my question is: why doesn’t it work, when it doesn’t?
What is wrong?

Defining variables for the container IP and the container interface, to easily refer to them from now on:

CONTAINER_IP="172.18.0.2"
CONTAINER_INTERFACE="br-71fe7cc090b3"

Defining a function to easily restore the firewalld conf to defaults; this way, I can completely restore the firewalld conf before each try, knowing that all tries act on the same initial conf:

_restore_firewalld() {
    sudo cp -Ta /usr/lib/firewalld/ /etc/firewalld/ && 
    sudo restorecon -r /etc/firewalld/ && 
    sudo firewall-cmd --complete-reload && 
    sudo firewall-cmd --set-log-denied=unicast ##to log rejects
}

WORKS on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=docker --add-source="$CONTAINER_IP" && 
$ sudo firewall-cmd --reload && 
success
Warning: ALREADY_SET: unicast
success
success
success

$ sudo firewall-cmd --info-zone=docker 
docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: docker0
  sources: 172.18.0.2
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

WORKS on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=trusted --add-source="$CONTAINER_IP" && 
$ sudo firewall-cmd --reload && 
success
Warning: ALREADY_SET: unicast
success
success
success

$ sudo firewall-cmd --info-zone=trusted 
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: 
  sources: 172.18.0.2
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

WORKS on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=docker --add-interface="$CONTAINER_INTERFACE" && 
$ sudo firewall-cmd --reload && 
success
Warning: ALREADY_SET: unicast
success
success
success

$ sudo firewall-cmd --info-zone=docker 
docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br-71fe7cc090b3 docker0
  sources: 
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

WORKS on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=trusted --add-interface="$CONTAINER_INTERFACE" && 
$ sudo firewall-cmd --reload && 
success
Warning: ALREADY_SET: unicast
success
success
success

$ sudo firewall-cmd --info-zone=trusted 
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br-71fe7cc090b3
  sources: 
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

WORKS on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=public --set-target=ACCEPT && 
$ sudo firewall-cmd --permanent --zone=public --add-source="$CONTAINER_IP" && 
$ sudo firewall-cmd --reload && 
success
Warning: ALREADY_SET: unicast
success
success
success

$ sudo firewall-cmd --info-zone=public 
public (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: 
  sources: 172.18.0.2
  services: dhcpv6-client mdns ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

DOES NOT WORK on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=docker --add-rich-rule="rule family=ipv4 source address="$CONTAINER_IP" accept" && 
$ sudo firewall-cmd --reload
success
Warning: ALREADY_SET: unicast
success
success
success

$ sudo firewall-cmd --info-zone=docker 
docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: docker0
  sources: 
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
    rule family="ipv4" source address="172.18.0.2" accept

DOES NOT WORK on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=public --add-rich-rule="rule family=ipv4 source address="$CONTAINER_IP" accept" && 
$ sudo firewall-cmd --reload
success
Warning: ALREADY_SET: unicast
success
success
success

$ sudo firewall-cmd --info-zone=public 
public
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client mdns ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
    rule family="ipv4" source address="172.18.0.2" accept

DOES NOT WORK on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=public --add-source="$CONTAINER_IP" && 
$ sudo firewall-cmd --permanent --zone=public --add-rich-rule="rule family=ipv4 source address="$CONTAINER_IP" accept" && 
$ sudo firewall-cmd --reload
success
Warning: ALREADY_SET: unicast
success
success
success
success

$ sudo firewall-cmd --info-zone=public 
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 172.18.0.2
  services: dhcpv6-client mdns ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
    rule family="ipv4" source address="172.18.0.2" accept

DOES NOT WORK on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=public --set-target=DROP && 
$ sudo firewall-cmd --permanent --zone=public --add-rich-rule="rule family=ipv4 source address="$CONTAINER_IP" accept" && 
$ sudo firewall-cmd --reload
success
Warning: ALREADY_SET: unicast
success
success
success
success

$ sudo firewall-cmd --info-zone=public 
public
  target: DROP
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client mdns ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
    rule family="ipv4" source address="172.18.0.2" accept

DOES NOT WORK on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=public --set-target=DROP && 
$ sudo firewall-cmd --permanent --zone=public --add-source="$CONTAINER_IP" && 
$ sudo firewall-cmd --permanent --zone=public --add-rich-rule="rule family=ipv4 source address="$CONTAINER_IP" accept" && 
$ sudo firewall-cmd --reload
success
Warning: ALREADY_SET: unicast
success
success
success
success
success

$ sudo firewall-cmd --info-zone=public 
public (active)
  target: DROP
  icmp-block-inversion: no
  interfaces: 
  sources: 172.18.0.2
  services: dhcpv6-client mdns ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
    rule family="ipv4" source address="172.18.0.2" accept

DOES NOT WORK on its own, from default restored firewalld conf:

$ _restore_firewalld && \
$ sudo firewall-cmd --permanent --zone=public --add-interface="$CONTAINER_INTERFACE" && 
$ sudo firewall-cmd --permanent --zone=public --add-source="$CONTAINER_IP" && 
$ sudo firewall-cmd --permanent --zone=public --add-rich-rule="rule family=ipv4 source address="$CONTAINER_IP" accept" && 
$ sudo firewall-cmd --reload
success
Warning: ALREADY_SET: unicast
success
success
success
success
success

$ sudo firewall-cmd --info-zone=public 
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: br-71fe7cc090b3
  sources: 172.18.0.2
  services: dhcpv6-client mdns ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
    rule family="ipv4" source address="172.18.0.2" accept