Why doesn't Ansible recognize the vCenter Windows machines?

I'm pretty new to Ansible, so I might have configured things wrong.
(I have a Docker container with the Ansible service running in it.)
I have an Ansible repository that includes the Ansible files (it is a Git repository).

My goal was to automatically revert each lab in the vCenter server to a specific snapshot.
So, with the help of the ansible-roles-explained-with-examples guide, I did the following:

  • Created a role named vcenter with the ansible-galaxy init command (see directory tree below)
  • Created some vCenter task files inside the tasks folder (see directory tree below). Here is an example of the poweroff.yml task file:
- name: Set the state of a virtual machine to poweroff
  community.vmware.vmware_guest_powerstate:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    folder: "/{{ datacenter_name }}/{{ folder }}"
    # name: "{{ guest_name }}"
    name: "{{ ansible_hostname }}"
    validate_certs: no
    state: powered-off
  delegate_to: localhost
  register: deploy
  • Supplied the vCenter credentials in the vcenter/vars/main.yml file, like this:
# vars file for vcenter
vcenter_hostname: vcenter.foo.com
vcenter_username: hiddai@foo.com
vcenter_password: f#0$o#1$0o
  • Included the tasks in the tasks/main.yml file with the import_tasks key, like this:
---
# tasks file for roles/vcenter
- import_tasks: poweroff.yml
# - import_tasks: poweron.yml
# - import_tasks: revert.yml
# - import_tasks: shutdown.yml
  • Created an all.yml inside the group_vars folder in the inventories library (I don't know if that is the professional way to do it) that includes all the WinRM connection details, like this:
---
#WinRM Protocol Details
ansible_user: DOMAIN\user
ansible_password: f#0$o#1$0o
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_scheme: http
ansible_winrm_server_cert_validation: ignore
ansible_winrm_transport: ntlm
ansible_winrm_read_timeout_sec: 60
ansible_winrm_operation_timeout_sec: 58
  • Created a revert_lab.yml playbook that includes the role, like this:
---
- name: revert an onpremis lab
  hosts: all
  roles:
  - vcenter

My ansible.cfg is like this:

[defaults]
inventory = /ansible/inventories
roles_path = ./roles:..~/ansible/roles

I executed the playbook to revert all the machines in the lab:

ansible-playbook playbooks/revert_vcenter_lab.yml -i inventories/test/onpremis/domain.com/lab_r.yml

The error I got was:

TASK [Gathering Facts] ****************************************************************************************************************************************************
[WARNING]: Error when collecting winrm facts: You cannot call a method on a null-valued expression.  At line:15 char:17  + ...
$ansibleFacts.ansible_win_rm_certificate_expires = $_.Not ...  +                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~      + CategoryInfo          :  
InvalidOperation: (:) (), RuntimeException      + FullyQualifiedErrorId : InvokeMethodOnNull      at <ScriptBlock>, <No file>: line 15  at <ScriptBlock>, <No file>: line  
13
ok: [vm1.domain.com]
ok: [vm2.domain.com]
ok: [vm3.domain.com]
ok: [vm4.domain.com]
ok: [vm5.domain.com]
ok: [vm6.domain.com]
ok: [vm7.domain.com]
ok: [vm8.domain.com]

TASK [vcenter : Set the state of a virtual machine to poweroff] ***********************************************************************************************************
fatal: [vm1.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM1'"}
fatal: [vm2.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM2'"}
fatal: [vm3.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM3'"}
fatal: [vm4.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM4'"}
fatal: [vm5.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM5'"}
fatal: [vm6.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM6'"}
fatal: [vm7.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM7'"}
fatal: [vm8.domain.com -> localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'VM8'"}

PLAY RECAP ****************************************************************************************************************************************************************
vm1.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
vm2.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
vm3.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
vm4.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
vm5.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
vm6.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
vm7.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
vm8.domain.com   : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

a) How do I get rid of the "Error when collecting winrm facts" warning? (It looks like the playbook does not recognize the all.yml file with the WinRM settings, but why?)
b) How do I fix the error "Unable to set power state for non-existing virtual machine…"? (We can see that the playbook reaches the machines by the FQDNs mentioned in the lab_r.yml file from the inventories library, but the error refers to the machine name as displayed in the vCenter platform…)
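
Regarding (b), here is a variation of the power-off task I was considering; it is only a sketch, and it assumes the vCenter VM names are the upper-cased short host names (VM1 for vm1.domain.com), which is what the fatal messages suggest:

- name: Set the state of a virtual machine to poweroff
  community.vmware.vmware_guest_powerstate:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    folder: "/{{ datacenter_name }}/{{ folder }}"
    # inventory_hostname_short strips the domain suffix from the inventory
    # name; the upper filter is an assumption based on the VM1…VM8 names
    # shown in the fatal messages, not something I have verified in vCenter
    name: "{{ inventory_hostname_short | upper }}"
    validate_certs: no
    state: powered-off
  delegate_to: localhost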

My repository:

C:.
├───ansible
│   │   ansible.cfg
│   ├───inventories
│   │   └───test
│   │       ├───cloud
│   │       └───onpremis
│   │           └───domain.com
│   │               │   lab_j.yml
│   │               │   lab_r.yml
│   │               └───group_vars
│   │                       all.yml
│   ├───playbooks
│   │       revert_lab.yml
│   └───roles
│       └───vcenter
│           ├───tasks
│           │       main.yml
│           │       poweroff.yml
│           │       poweron.yml
│           │       revert.yml
│           │       shutdown.yml
│           └───vars
│                   main.yml

Ansible-configured iptables rules in a Packer AMI are not available on the running VM instance

I've built an Apache NiFi AMI using Packer and Packer's Ansible provisioner. By default, NiFi starts bound only to the loopback IP, as can be seen from conf/nifi.properties:

nifi.web.http.host=127.0.0.1
nifi.web.http.port=8080
nifi.web.http.network.interface.default=

The IP address to bind to cannot be known at the time of building the AMI, so we leave this property as it is and create iptables rules instead, as documented here, here and here.

The commands I intended to run:

sudo sysctl net.ipv4.ip_forward=1
sudo sysctl -w net.ipv4.conf.all.route_localnet=1
sudo iptables -t nat -I PREROUTING -m tcp -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:8080

Here are the Ansible tasks:

- name: Allow IP forwarding
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    sysctl_set: yes
    state: present
    reload: yes
  tags:
    - notest

# Set up iptables port forwarding from the public IP to 127.0.0.1:8080
# This is because we cannot configure the IP on which NiFi listens at the time
# of building AMI, so it runs listening to 127.0.0.1:8080 only
- name: allow routing of traffic from the attached AWS NIC to loopback
  sysctl:
    name: net.ipv4.conf.all.route_localnet
    value: "1"
    sysctl_set: yes
    state: present
    reload: yes
  tags:
    - notest

- name: Enable portforwarding for Public CIDR block to Nifi localhost port 8080
  iptables:
    table: nat
    chain: PREROUTING
    protocol: tcp
    match: tcp
    destination_port: "80"
    jump: DNAT
    to_destination: "127.0.0.1:8080"
  tags:
    - notest

When I build an instance from this AMI, sudo iptables -t nat -v -L PREROUTING -n --line-number returns no result.
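
One thing I suspect but have not verified: the iptables module, like the iptables command itself, only alters the in-memory ruleset, and nothing is written to disk, so the rules would not survive the shutdown/boot cycle between the Packer build and the instance launch. A sketch of the persistence step I am considering, assuming a Debian/Ubuntu base image with iptables-persistent installed:

- name: Persist the NAT rule so it survives the AMI snapshot and reboot
  shell: iptables-save > /etc/iptables/rules.v4
  # /etc/iptables/rules.v4 and the iptables-persistent mechanism that reloads
  # it at boot are assumptions about the base image, not from the build above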

Ansible conditional file placement

How do I perform the following with an Ansible playbook? I'm new to it. Thanks!

Example file that contains the host names and passwords:

{ 
  "hosts":[
  {
      "node": "node1",
      "pass": "pass1"
  },
  {
      "node": "node2",
      "pass": "pass2"
  }
]}

I need to put a file.txt on each server with its respective values.

For example, on node1 the file.txt should contain the node name "node1" and the password "pass1".
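
A sketch of one way I imagine this could work: load the JSON on each host, select this host's entry by matching node against inventory_hostname, and write the file. The hosts.json file name and the /tmp/file.txt destination are assumptions for illustration:

- hosts: all
  vars:
    host_data: "{{ lookup('file', 'hosts.json') | from_json }}"
  tasks:
    - name: Write file.txt with this host's node name and password
      copy:
        dest: /tmp/file.txt
        content: |
          {{ entry.node }}
          {{ entry.pass }}
      vars:
        # pick the entry whose node matches this host's inventory name
        entry: "{{ host_data.hosts | selectattr('node', 'equalto', inventory_hostname) | first }}"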

ansible – how to refer to a variable of another host in a static inventory host_vars file?

I have an inventory like following:

inventory/
├── group_vars
│   └── all.yml
├── host_vars
│   ├── serverC.yml
│   ├── master02.yml
│   └── master01.yml
└── hosts.yml

I know I can dynamically access another host's variables via hostvars[otherhost][variable]. However, I would like to do a similar thing in the inventory host_vars files.
In serverC.yml:

myvar1: "{{ hostvars['master01']['myvar1'] }}"
myvar2: "{{ hostvars['master02']['myvar2'] }}"

In master02.yml:

myvar2: "{{ hostvars['master01']['myvar2'] }}"

In master01.yml:

myvar1: test1
myvar2: test2

So far, myvar1 works when I run the playbook with -l serverC, and myvar2 works when I run the playbook on master02. However, myvar2 is printed as the literal string "{{ hostvars['master02']['myvar2'] }}" when run with -l serverC. Is there any way to make sure myvar2 correctly expands to test2?
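
For reference, this is how I am checking the expansion (a minimal sketch; it targets serverC from the inventory above):

- hosts: serverC
  gather_facts: false
  tasks:
    - name: Show how myvar2 expands on serverC
      debug:
        var: myvar2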

ansible – How to win_ping to hosts with inventory and group_vars files?

  • I'm trying to write a correct command line that will ping all hosts that are detailed in my inventory file
  • All hosts are Windows OS hosts
  • Below is my Ansible repository structure:
C:.
└───bla_product
    └───core
        ├───ansible
        │   ├───inventories
        │   │   ├───production
        │   │   ├───staging
        │   │   └───test
        │   │       ├───cloud
        │   │       └───onpremis
        │   │           └───domain.com
        │   │               │   lab_x.yml
        │   │               │
        │   │               └───group_vars
        │   │                       windows.yml
        │   │
        │   ├───playbooks
        │   └───roles
  • My inventory file lab_x.yml looks like this:
---
all:
  children:
    root:
      children:
        center:
          children:
            appservers:
              hosts:
                centeriis.domain.com:
                  ansible_host: 200.10.0.100
            qservers:
              hosts:
                centerq.domain.com:
                  ansible_host: 200.10.0.101
            dbservers:
              hosts:
                centerdb.domain.com:
                  ansible_host: 200.10.0.102
        serverfarms:
          hosts:
          children:
            gateways:
              hosts:
        south:
          children:
            brooklyn:
              hosts:
                  srv1.domain.com:
                    ansible_host: 200.10.0.103
              children:
                endpoints:
                  hosts:
                    client1.domain.com:
                      ansible_host: 200.10.0.105
                    client2.domain.com:
                      ansible_host: 200.10.0.106
        north:
          children:
            newyork:
              hosts:
                srv2.domain.com:
                  ansible_host: 200.10.0.104
              children:
                endpoints:
                  hosts:
                    client3.domain.com:
                      ansible_host: 200.10.0.107
  • The windows.yml file includes connection details that refer to all hosts, since they are all Windows OS hosts:
---
ansible_connection: winrm
ansible_user: domain\user
ansible_password: password
  • Running the following command, ansible all -i lab_r.yml -m win_ping, results in:
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.8 (default, Nov 16 2020, 16:55:22) (GCC
 4.8.5 20150623 (Red Hat 4.8.5-44)). This feature will be removed from ansible-core in version 2.12. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.
centeriis.domain.com | UNREACHABLE! => {
    "changed": false,
    "msg": "(Errno None) Unable to connect to port 22 on 200.10.0.100",
    "unreachable": true
}
  • Trying this one, ansible windows.yml -i lab_r.yml -m win_ping, gives:
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.8 (default, Nov 16 2020, 16:55:22) (GCC
 4.8.5 20150623 (Red Hat 4.8.5-44)). This feature will be removed from ansible-core in version 2.12. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.
[WARNING]: Could not match supplied host pattern, ignoring: windows.yml
[WARNING]: No hosts matched, nothing to do
  • What am I missing in this "story"?
  • Is the problem with the files OR the commands?
  • What is the reason that Ansible uses port 22 instead of using the WinRM protocol?
  • Can the win_ping command work at this stage, OR must I have playbook and role (task) files in order for it to work?
  • How do I make this whole business work (a command that uses the files within the inventories and group_vars folders)? See the sketch after this list for what I have understood so far.
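
For background, what I have understood so far: group_vars files are matched by group name, so group_vars/windows.yml only applies to hosts that belong to a group literally named windows, and the pattern on the command line must be a host or group name, not a file name. A sketch of the two pieces I think are needed (the simplified group layout is for illustration, not my real inventory):

# lab_x.yml – put the Windows hosts under a group named "windows"
all:
  children:
    windows:
      hosts:
        centeriis.domain.com:
          ansible_host: 200.10.0.100

# then group_vars/windows.yml applies to that group, and the host pattern
# on the command line is the group name, not the file name:
#   ansible windows -i lab_x.yml -m win_ping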

ansible – How to parse a map?

I'm trying to create a list in Ansible which consists of some Docker container information. First, I'm running a command module which returns this in stdout:

"map[key1:value1 key2:value2 key3:value3]"

How can I parse this further to get the values based on the key that I provide? When I use a map filter, I get this:

"msg": "<generator object do_map at 0x7f3845b8a740>"

If I run the list filter, I just get output as every single character in the map, so ["m", "a", "p", "[", "k", …]

What filter should I use?
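
For illustration, the kind of lookup I am after could look like this sketch; it assumes a reasonably recent ansible-core (for the split filter) and that keys and values never contain spaces or colons:

- name: Parse the Go-style map string into a dict
  vars:
    raw: "map[key1:value1 key2:value2 key3:value3]"
    # strip the "map[...]" wrapper, split on spaces into "k:v" pairs,
    # split each pair on ":", then build a dict from the pairs
    parsed: "{{ dict(raw | regex_replace('^map\\[|\\]$', '') | split(' ') | map('split', ':')) }}"
  debug:
    msg: "{{ parsed['key2'] }}"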

Ansible shell command tasks not running

I am new to Ansible. I am trying to use the Ansible shell module to execute a command on the host, but the task containing shell is skipped. Can someone help me?

Here is the code I am using:

- name: To retrieve the process running state
  hosts: localhost
  tasks:
    - name: Change the working directory to somedir/ before executing the command
      shell:
        cmd: ls -l | grep log
        chdir: /home/upgrad/

Output:

ansible-playbook cmdmod.yml  --check

PLAY [To retrieve the process running state] *******************************************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [Change the working directory to somedir/ before executing the command] ***********************************************************************************************************************************
skipping: [localhost]

PLAY RECAP *****************************************************************************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
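
One thing I notice in my own output above: I ran with --check, and as far as I understand, in check mode Ansible skips shell and command tasks by default because it cannot predict what they would change. A sketch of the same task forced to run anyway with the standard check_mode task keyword:

- name: Change the working directory to somedir/ before executing the command
  shell:
    cmd: ls -l | grep log
    chdir: /home/upgrad/
  check_mode: false   # run this task even when the playbook is run with --check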

Ansible print debug message result variable

I have a simple problem that I cannot get past.

I have a playbook that returns the AWS EC2 instance configuration. I need to print (display) only the private_ip_address.

Here is my playbook:

---
- hosts: local
  connection: local
  gather_facts: false
  become: yes
  become_method: enable

  tasks:

  - name: gather-info-ec2
    community.aws.ec2_instance_info:
      instance_ids:
        - i-XXXXXAAAAAA

    register: ec2

  - debug: msg="{{ ec2.instances.network_interfaces.private_ip_address }}"

When I run it like this, I get the following error:

fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'list object' has no attribute 'network_interfaces'\n\nThe error appears to be in '/etc/ansible/playbooks/AWSLinuxMigration/gather_ec2_info.yaml': line 16, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n  - debug: msg=\"{{ ec2.instances.network_interfaces.private_ip_address }}\"\n    ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n    with_items:\n      - {{ foo }}\n\nShould be written as:\n\n    with_items:\n      - \"{{ foo }}\"\n"}

When I execute it without the debug task and with -vvv, it displays the result below. How do I extract and print this address? I shortened it a bit, but you get the idea.

ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "instances": (
            {
                "ami_launch_index": 0,
                "architecture": "x86_64",
                "block_device_mappings": (
                    {
                        "device_name": "/dev/xvda",
                        "ebs": {
                            "attach_time": "2020-04-15T16:11:19+00:00",
                            "delete_on_termination": true,
                            "status": "attached",
                            "volume_id": "xxxxxx"
                        }
                    }
                ],
                "capacity_reservation_specification": {
                    "capacity_reservation_preference": "open"
                },
                "client_token": "",
                "cpu_options": {
                    "core_count": 1,
                    "threads_per_core": 2
                },
                "ebs_optimized": true,
                "ena_support": true,
                "enclave_options": {
                    "enabled": false
                },
                "hibernation_options": {
                    "configured": false
                },
                "hypervisor": "xen",
                "iam_instance_profile": {
                    "arn": "xxxxxx",
                    "id": "xxxxxx"
                },
                "image_id": "xxxxx",
                "instance_id": "xxxxx",
                "instance_type": "t3.medium",
                "key_name": "xxxxx",
                "launch_time": "2021-04-21T00:01:25+00:00",
                "metadata_options": {
                    "http_endpoint": "enabled",
                    "http_put_response_hop_limit": 1,
                    "http_tokens": "optional",
                    "state": "applied"
                },
                "monitoring": {
                    "state": "disabled"
                },
                "network_interfaces": (
                    {
                        "association": {
                            "ip_owner_id": "xxxx",
                            "public_dns_name": "xxxxx",
                            "public_ip": "xxxx"
                        },
                        "attachment": {
                            "attach_time": "2020-04-15T16:11:18+00:00",
                            "attachment_id": "xxxxx",
                            "delete_on_termination": true,
                            "device_index": 0,
                            "network_card_index": 0,
                            "status": "attached"
                        },
                        "description": "Primary network interface",
                        "groups": (
                            {
                                "group_id": "xxxxx",
                                "group_name": "xxxxx"
                            }
                        ],
                        "interface_type": "interface",
                        "ipv6_addresses": (),
                        "mac_address": "xxxxx",
                        "network_interface_id": "xxxx",
                        "owner_id": "xxxxx",
                        "private_dns_name": "ip-10-0-1-161.ec2.internal",
                        "private_ip_address": "10.0.1.161",
                        "private_ip_addresses": (
                            {
                                "association": {
                                    "ip_owner_id": "xxxxx",
                                    "public_dns_name": "xxxx.compute-1.amazonaws.com",
                                    "public_ip": "2.2.2.2"
                                },
                                "primary": true,
                                "private_dns_name": "ip-333333.ec2.internal",
                                "private_ip_address": "1.1.1.1."
                            }
                        ],
                        "source_dest_check": true,
                        "status": "in-use"

                    }
                ]
            }
        ]
    }
}
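
From the structure above I can see that instances and network_interfaces are both lists, so I suppose each needs an index (or a filter such as json_query). A sketch assuming a single instance with a single network interface:

- debug:
    msg: "{{ ec2.instances[0].network_interfaces[0].private_ip_address }}"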

Using ansible loop until with stdout_lines

I am trying to use an Ansible until loop that retries until a condition is met.
I can use until if the output is only a single line; however, if the output is multiple lines, I need to use stdout_lines, but I have failed to do so.

If the output is a single line:

- name: check on sync status
  shell: some command
  register: sync_status
  until: sync_status.stdout == 'SSUS'

If the output is multiple lines, then I try to use stdout_lines:

- name: check on sync status
  shell: some command 
  register: sync_status
  until: item.stdout_lines == 'SSUS'
  with_items: "{{ sync_status }}"

but I get a variable-undefined error:

fatal: [xxxxxxx]: FAILED! => {
    "msg": "'sync_status' is undefined"
}

I don't want to do it in a separate task, because then sync_status is registered by the previous task and I would be comparing the old status instead of the current status.

Kindly assist.
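
To make the intent concrete, here is roughly what I am trying to express, as a sketch; the retries and delay values are placeholders, and it assumes success means at least one output line is exactly SSUS:

- name: check on sync status
  shell: some command
  register: sync_status
  retries: 30    # placeholder values, not from the original task
  delay: 10
  # true as soon as at least one line of output is exactly 'SSUS'
  until: "'SSUS' in sync_status.stdout_lines"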