firewalld – manage internal connections only on Red Hat 7

We have an infrastructure with a hardware firewall that manages connections from outside, and the OS firewall is disabled. For some reasons we now need to enable the OS firewall, but it would be a headache to apply every rule on both the hardware and OS firewalls.
So is there any way to make the OS firewall manage only connections between the servers (layer 2) and leave outside connections to the hardware firewall?
OS: Red Hat 7
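One common pattern (a sketch only; 10.0.0.0/24 and the service/port entries are placeholders for the real server subnet and rules) is to bind just the internal sources to a dedicated firewalld zone, so only server-to-server traffic is evaluated against OS firewall rules:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- /etc/firewalld/zones/internal.xml (hypothetical example): only traffic
     whose source address matches the listed subnet is matched to this zone -->
<zone>
  <short>Internal</short>
  <description>Server-to-server rules; outside traffic is left to the hardware firewall</description>
  <source address="10.0.0.0/24"/>
  <service name="ssh"/>
  <port protocol="tcp" port="5432"/>
</zone>
```

With the default zone then set to trusted (firewall-cmd --permanent --set-default-zone=trusted, followed by firewall-cmd --reload), traffic that does not match the internal source passes through unfiltered, so the hardware firewall remains the only filter for outside connections.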

redhat – yum-cron enabled by default on GCP

Does anyone have any idea why Google or Red Hat has enabled yum-cron on RHEL7 VMs on GCP? yum-cron-3.4.3-167.el7.noarch is preinstalled and enabled on the VM. The client never wants their systems to be automatically patched. This is disastrous.

Because of this, a whole group of servers went down: the servers were on a scheduled reboot and there was a bug in shim-x64-15-7.el7_8.x86_64.

Can anyone raise this issue with Google or Red Hat, whoever is responsible for this mistake?

Screenshot attached for reference:

systemctl is-enabled yum-cron.service

systemctl status yum-cron.service
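Until the image defaults change, a workaround (a sketch of the standard yum-cron knobs) is to disable the service outright with systemctl disable --now yum-cron, or to keep it installed but inert by turning off the update actions in /etc/yum/yum-cron.conf:

```ini
# /etc/yum/yum-cron.conf — only the update-related keys of the [commands]
# section shown; with all three set to "no", yum-cron takes no action
[commands]
update_messages = no
download_updates = no
apply_updates = no
```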

redhat – RHEL OS with XFS file system while adding an additional partition with a different definition

We have the following disk – sda (in a production server – a very important server).

sda is the OS disk (RHEL 7.2), and the file system is XFS.

sda is a physical disk.

XFS was created with – mkfs.xfs -f -n ftype=0 /dev/sda

sda                8:0    0 558.4G  0 disk
├─sda1             8:1    0   500M  0 part /boot
└─sda2             8:2    0 557.9G  0 part
  ├─rhel0-lv_root 253:0    0    50G  0 lvm  /
  ├─rhel0-lv_swap 253:1    0    16G  0 lvm  [SWAP]
  └─rhel0-lv_var  253:2    0   100G  0 lvm  /var
sdb                8:16   0   5.5T  0 disk /var/DB
[root@server_prod01 ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel0 lvm2 a--  557.88g 391.88g

Since we have 391G free (PFree),

we want to add an additional partition – sda3 – and build an XFS filesystem on it, but slightly different: mkfs.xfs -f -n ftype=1.

Is it possible?

redhat – Templating firewalld zones with ansible – issue with xml manipulation

With ansible 2.9 on RHEL7.6 I’m trying to configure individual firewalld zones which also includes configuration of rich rules.
It all works fine, except when I’m trying to template adding a rich rule in. In the example below, I’m attempting to add a rich rule allowing VRRP traffic.

Ansible task:

    - name: Configure firewalld zones
      template:
        src: zone_template.xml.j2
        dest: /etc/firewalld/zones/{{ item.name }}.xml
      with_items: "{{ firewalld_zones }}"
      notify: reload firewalld
      loop_control:
        label: "{{ item.name }}"

The variable firewalld_zones is defined in my defaults/main.yml as the following:

firewalld_zones:
  - name: public
    short: "Public"
    description: "Public Zone"
    port:
      - { port: 300, protocol: tcp }
      - { port: 300, protocol: udp }
    rule:
      - protocol:
          - value: "vrrp"
          - accept: yes

Snippet of my template zone_template.xml.j2:

<?xml version="1.0" encoding="utf-8"?>
<zone{% if item.target is defined %} target="{{ item.target }}"{% endif %}>
  <short>{{ item.short|default(item.name)|upper }}</short>
{% if item.description is defined %}
  <description>{{ item.description }}</description>
{% endif %}
{% for tag in item %}
{# Settings which can be used several times #}
{% if tag in ('interface','source','service','port','protocol','icmp-block','forward-port','source-port') %}
{% for subtag in item[tag] %}
  <{{ tag }}{% for name,value in subtag.items() %} {{ name }}="{{ value }}"{% endfor %}/>
{% endfor %}
{# Settings which can be used once #}
{% elif tag in ('icmp-block-inversion','masquerade') and item[tag] == True %}
  <{{ tag }}/>
{% endif %}
{% endfor %}
{% for rule in item.rule|default([]) %}
  <rule{% if rule.family is defined %} family="{{ rule.family }}"{% endif %}>
{% for tag in rule %}
{% if tag in ('source','destination','service','port','icmp-block','icmp-type','masquerade','forward-port','protocol') %}
{% for subtag in rule[tag] %}
{% if subtag in ('accept') %}
    <{% for name,value in subtag.items() %}{{ name }}{% endfor %}/>
{% endif %}
    <{{ tag }}{% for name,value in subtag.items() %} {{ name }}="{{ value }}"{% endfor %}/>
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}

With this I get:

<?xml version="1.0" encoding="utf-8"?>
  <description>Public Zone</description>
  <port protocol="tcp" port="300"/>
  <port protocol="udp" port="300"/>
    <protocol value="vrrp"/>
    <protocol accept="yes"/>

What I’m trying to get is this:

<?xml version="1.0" encoding="utf-8"?>
  <description>Public Zone</description>
  <port protocol="tcp" port="300"/>
  <port protocol="udp" port="300"/>
    <protocol value="vrrp"/>

What do I need to change (template and/or variables) to achieve this?
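The extra <protocol accept="yes"/> element follows directly from the data shape: the rule's protocol key holds a list of two one-key dicts, and the template emits one XML element per dict. A small stdlib-only sketch (the render helper is hypothetical, just mimicking the template's inner loop outside of Ansible) shows both shapes:

```python
def render(tag, subtags):
    # Mimics the template's inner loop: one XML element per dict in the list,
    # with each dict's key/value pairs becoming attributes
    return ["<%s %s/>" % (tag, " ".join('%s="%s"' % (k, v) for k, v in d.items()))
            for d in subtags]

# Current variable shape: two one-key dicts -> two elements
print(render("protocol", [{"value": "vrrp"}, {"accept": "yes"}]))
# -> ['<protocol value="vrrp"/>', '<protocol accept="yes"/>']

# Dropping the accept entry from the list -> the single desired element;
# accept then has to be handled separately, as its own rule action tag
print(render("protocol", [{"value": "vrrp"}]))
# -> ['<protocol value="vrrp"/>']
```

In other words, either merge the attributes into one dict or move accept out of the protocol list and render it as a standalone action element.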


redhat enterprise linux – problems with user namespaces in docker

Good afternoon. I am currently configuring user namespace remapping for my containers in the daemon.json file, following the tutorials that I post below.

I am using the following documentation:

[dockermd]# sudo systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

Running systemctl status docker.service

[dockermd]# sudo systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/docker.service.d
   Active: failed (Result: exit-code) since Wed 2020-08-05 14:09:34 -04; 2s ago
  Process: 2480121 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
 Main PID: 2480121 (code=exited, status=1/FAILURE)

Aug 05 14:09:34 TMT097 systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Aug 05 14:09:34 TMT097 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 05 14:09:34 TMT097 systemd[1]: Stopped Docker Application Container Engine.
Aug 05 14:09:34 TMT097 systemd[1]: docker.service: Start request repeated too quickly.
Aug 05 14:09:34 TMT097 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 05 14:09:34 TMT097 systemd[1]: Failed to start Docker Application Container Engine.

Running dockerd

dockerd unable to configure the Docker daemon with file /etc/docker/daemon.json: open /etc/docker/daemon.json: permission denied

so I have my daemon.json:

{
    "data-root": "/opt/docker",
    "storage-driver": "overlay2",
    "log-driver": "json-file",
    "userns-remap": "10007:10007",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    }
}
(screenshots attached: the daemon.json file and the output of id for my user)

I need to do this because, for security reasons, the use of root permissions is very limited, so many Docker functionalities that require root permissions are unavailable, and I am doing this test with a view to using root as little as possible in production.

What solution could I apply?

linux – YUM command failed with [Errno 14] curl#60 – “SSL certificate : unable to get local issuer certificate” in REDHAT

I'm getting an error, [Errno 14] curl#60 - "SSL certificate problem: unable to get local issuer certificate", when I use yum to install or update any package. For example, I run:


yum install curl

and it gives output like this:

[root@dtetestmaster svradmin]# yum install curl
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

DialogRHSCLRepo                                                                                                          | 3.4 kB  00:00:00     
DialogRepo                                                                                                               | 3.5 kB  00:00:00 [Errno 14] curl#60 - "SSL certificate problem: unable to get local issuer certificate"
Trying other mirror.
It was impossible to connect to the CentOS servers.
This could mean a connectivity issue in your environment, such as the requirement to configure a proxy,
or a transparent proxy that tampers with TLS security, or an incorrect system clock.
You can try to solve this issue by using the instructions on
If above article doesn't help to resolve this issue please use

 One of the configured repositories failed (Docker CE Stable - x86_64),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=docker-ce-stable ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable docker-ce-stable
            subscription-manager repos --disable=docker-ce-stable

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=docker-ce-stable.skip_if_unavailable=true

failure: repodata/repomd.xml from docker-ce-stable: [Errno 256] No more mirrors to try. [Errno 14] curl#60 - "SSL certificate problem: unable to get local issuer certificate"

Please help to resolve this error. Thanks!
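curl#60 generally means the local CA trust store is missing the certificate of the issuing CA, which on RHEL 7 is usually resolved by placing the missing CA certificate (often the one injected by a TLS-intercepting proxy) in /etc/pki/ca-trust/source/anchors/ and running update-ca-trust extract. As a quick diagnostic sketch, this stdlib snippet shows how many CA certificates the default OpenSSL trust store loads:

```python
import ssl

# Build a context with the platform's default trust store, roughly the
# same bundle curl and yum rely on; an x509_ca count of 0 suggests an
# empty or missing CA bundle
ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()
print(stats)  # e.g. {'crl': 0, 'x509_ca': 140, 'x509': 140}
```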

redhat – Why after running subprocess.Popen(), not able to write to a file?

I am not able to write to a file after Popen() runs.

conf_file = '/home/' + service_ac + '/conf/keep-allGateways-alive.cfg'

def find_gw_ips():
    fp.write('hello')    # this gets printed in file
    t = subprocess.Popen("ifconfig", stdout=subprocess.PIPE)
    fp.write('hello_2')  # this does not get printed in file


I use crontab to run this script; run without crontab, it works fine, without any issues:

* * * * * python /home/vtu/bin/ vtu >/tmp/out.txt 2>&1

I am using RHEL 7.4, Python 2.7.5.

I tried redirecting and got errors:

Traceback (most recent call last):
  File "/home/vru/bin/", line 34, in <module>
  File "/home/vru/bin/", line 30, in find_gw_ips
    t = subprocess.Popen("ifconfig", stdout=subprocess.PIPE)
  File "/usr/lib64/python2.7/", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
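A likely explanation for the traceback (an assumption based on the cron context): cron runs jobs with a minimal PATH, typically just /usr/bin:/bin, while ifconfig on RHEL 7 lives in /usr/sbin, so Popen("ifconfig") raises OSError: [Errno 2] under cron even though it works from a login shell. A sketch of the absolute-path fix, with /bin/echo standing in for /usr/sbin/ifconfig:

```python
import subprocess

# Under cron, prefer absolute paths over bare command names;
# /bin/echo stands in for /usr/sbin/ifconfig in this sketch
proc = subprocess.Popen(["/bin/echo", "gateway-list"], stdout=subprocess.PIPE)
out, _ = proc.communicate()  # waits for the child and drains the pipe
print(out.decode().strip())  # -> gateway-list
```

Calling communicate() (or at least wait()) also matters in the original script: it reaps the child and flushes the pipe before the parent carries on writing to its own files.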

ntp – Redhat Enterprise Linux 7: How to disable DST – Daylight Saving Time

You don’t disable DST per se: you set the desired time zone, and you get DST if that time zone has DST.

You can check the configured time zone with timedatectl.

If your timezone has DST you’ll see something like this:

[root@stonard ~]# timedatectl 
      Local time: Sat 2020-06-20 18:27:30 EDT
  Universal time: Sat 2020-06-20 22:27:30 UTC
        RTC time: Sat 2020-06-20 22:27:30
       Time zone: America/New_York (EDT, -0400)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2020-03-08 01:59:59 EST
                  Sun 2020-03-08 03:00:00 EDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2020-11-01 01:59:59 EDT
                  Sun 2020-11-01 01:00:00 EST

Otherwise you’ll see something like this:

[root@farshire ~]# timedatectl 
      Local time: Sat 2020-06-20 22:26:50 GMT
  Universal time: Sat 2020-06-20 22:26:50 UTC
        RTC time: Sat 2020-06-20 22:26:50
       Time zone: Etc/GMT (GMT, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

To change the time zone, use timedatectl set-timezone ZONE, where ZONE is a valid zoneinfo zone. For example:

# timedatectl set-timezone Europe/Kiev
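The point that DST is a property of the zone itself can also be seen straight from the tz database; a small sketch (assuming Python 3.9+ with the zoneinfo module and system tzdata installed):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+, reads the system tz database

# America/New_York observes DST: a June timestamp carries a one-hour DST offset
summer_ny = datetime(2020, 6, 20, 18, 0, tzinfo=ZoneInfo("America/New_York"))
print(summer_ny.dst())   # -> 1:00:00

# Etc/GMT has no DST rules at all, matching "DST active: n/a" from timedatectl
summer_gmt = datetime(2020, 6, 20, 22, 0, tzinfo=ZoneInfo("Etc/GMT"))
print(summer_gmt.dst())  # no DST offset
```

So picking a fixed-offset zone such as Etc/GMT is the only real way to "turn off" DST, at the cost of the local wall-clock time no longer matching the region.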

redhat – Why do I get a curl error when I run the yum update?

I work in a laboratory. There are many Windows PCs in this laboratory, one of which I use.

There is also a proxy server that all PCs can use to connect to the Internet.

I now have a Red Hat 7 computer that is connected only to my PC, which means that it cannot reach the proxy server directly.

|          |                  |           |
| Internet |<--- proxy server |<--- my PC |<--- RHEL7
|          |                  |           |

Say the IP of my PC that the RHEL7 machine can ping is a.a.a.a, the IP of the proxy server that my PC can ping is b.b.b.b, and the proxy server's port is 8080.

Now I have to let the RHEL7 surf the internet. I did the following:

1) I do the configuration on my PC as follows:

port_forwarding(a.a.a.a, 6113, b.b.b.b, 8080)

2) I do the configuration on the Linux computer as follows:

export http_proxy="http://my_name:my_passwd@a.a.a.a:6113/"
export https_proxy="https://my_name:my_passwd@a.a.a.a:6113/"
export ftp_proxy="ftp://my_name:my_passwd@a.a.a.a:6113/"

Now wget works on my RHEL7 machine. My configuration works.

Then I run yum makecache and here is the output:

me@localhost:/etc/yum.repos.d$ yum makecache
Loaded plugins: langpacks, product-id, search-disabled-repos
base                                                                                                                                   | 3.6 kB  00:00:00
extras                                                                                                                                 | 2.9 kB  00:00:00
updates                                                                                                                                | 2.9 kB  00:00:00
(1/10): base/x86_64/group_gz                                                                                                           | 165 kB  00:00:00
(2/10): base/x86_64/primary_db                                                                                                         | 6.0 MB  00:00:02
(3/10): extras/x86_64/primary_db                                                                                                       | 165 kB  00:00:00
(4/10): extras/x86_64/filelists_db                                                                                                     | 217 kB  00:00:00
(5/10): base/x86_64/filelists_db                                                                                                       | 7.3 MB  00:00:03
(6/10): extras/x86_64/other_db                                                                                                         | 106 kB  00:00:00
(7/10): base/x86_64/other_db                                                                                                           | 2.6 MB  00:00:00
(8/10): updates/x86_64/filelists_db                                                                                                    | 4.5 MB  00:00:01
(9/10): updates/x86_64/other_db                                                                                                        | 573 kB  00:00:00
(10/10): updates/x86_64/primary_db                                                                                                     | 7.6 MB  00:00:03

It appears that yum makecache works. However, when I run sudo yum update I get an error message:

[Errno 14] curl#6 - "Could not resolve host:; Unknown error"
Trying other mirror.
[Errno 14] curl#6 - "Could not resolve host:; Unknown error"

I've tried all the mirrors here:, but I always get the same error.

Incidentally, here is the output of curl -v:

* About to connect() to proxy port 6113 (#0)
*   Trying
* Connected to ( port 6113 (#0)
* Proxy auth using Basic with user 'me'
> GET HTTP:// HTTP/1.1
> Proxy-Authorization: Basic ejAwNDM2ODgwOnI2Ni0xODE2
> User-Agent: curl/7.29.0
> Host:
> Accept: */*
> Proxy-Connection: Keep-Alive
< HTTP/1.1 301 Moved Permanently
< via: proxy A
< Date: Fri, 10 Apr 2020 03:49:09 GMT
< Server: CloudWAF
< Location:
< Set-Cookie: HWWAFSESID=b0be07ce156888de4e; path=/
< Set-Cookie: HWWAFSESTIME=1586490548595; path=/
< Content-Type: text/html
< Cache-Control: public
< Content-Length: 182
< Proxy-Connection: Keep-Alive

301 Moved Permanently

301 Moved Permanently

* Connection #0 to host left intact
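One likely explanation (an assumption, but consistent with yum makecache succeeding as the normal user while sudo yum update fails): sudo resets the environment by default, so the http_proxy/https_proxy exports never reach yum when it runs as root, and yum then tries to resolve the mirror hostnames directly. yum can be given the proxy in its own configuration instead, independent of shell environment variables, for example:

```ini
# /etc/yum.conf, [main] section — hypothetical values matching the setup above
proxy=http://a.a.a.a:6113
proxy_username=my_name
proxy_password=my_passwd
```

Alternatively, running sudo -E yum update preserves the exported proxy variables for that one invocation.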