storage area network – Unable to connect while adding a new node to an OCFS2 cluster

I manage a cluster of 4 servers running Ubuntu 20.04, with a 5th now being added. They access a SAN that has been formatted as OCFS2 (Oracle Cluster File System 2). With the original 4 nodes everything worked fine, but when I repeat the same setup on the 5th node I get errors I don't understand.
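For context, the mount is being attempted with a command along these lines (the device path and mount point below are placeholders, not the real values):

sudo mount -t ocfs2 /dev/sdb1 /mnt/san    # device and mount point are illustrative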

Mounting gives me an error which suggests I look at dmesg. When I do, I get these errors:

[ 2615.303083] o2net: Connection to node 001 (num 0) at 172.16.100.1:7777 shutdown, state 7
[ 2615.303125] o2net: Connection to node 002 (num 1) at 172.16.100.2:7777 shutdown, state 7
[ 2615.303141] o2net: Connection to node 003 (num 2) at 172.16.100.3:7777 shutdown, state 7
[ 2615.303175] o2net: Connection to node 004 (num 3) at 172.16.100.4:7777 shutdown, state 7
[ 2615.398706] o2net: No connection established with node 0 after 30.0 seconds, check network and cluster configuration.
[ 2615.398716] o2net: No connection established with node 1 after 30.0 seconds, check network and cluster configuration.
[ 2615.398720] o2net: No connection established with node 2 after 30.0 seconds, check network and cluster configuration.
[ 2615.398723] o2net: No connection established with node 3 after 30.0 seconds, check network and cluster configuration.
[ 2617.254678] o2cb: This node could not connect to nodes:
[ 2617.254682]  0
[ 2617.254689]  1
[ 2617.254690]  2
[ 2617.254691]  3
[ 2617.254693] .
[ 2617.254695] o2cb: Cluster check failed. Fix errors before retrying.
[ 2617.254701] (mount.ocfs2,6235,1):ocfs2_dlm_init:3348 ERROR: status = -107
[ 2617.254791] (mount.ocfs2,6235,1):ocfs2_mount_volume:1821 ERROR: status = -107
[ 2617.254799] (mount.ocfs2,6235,1):ocfs2_fill_super:1190 ERROR: status = -107

I can ssh from the new node to all of the other nodes and vice versa. The firewall is open on all ports between the 5 nodes, since they are on a local intranet. I've also run sudo service ocfs2 restart on all of the nodes.
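To rule out a basic networking problem, these are the kinds of checks I can run from the new node (the IPs are taken from the config below; the o2cb service name assumes the stock Ubuntu ocfs2-tools packaging):

# check that the o2net port on an existing node is reachable from the new node
nc -zv 172.16.100.1 7777

# check whether the cluster is registered and online on this node
sudo service o2cb status

# confirm the local o2net listener is bound on port 7777
sudo ss -tlnp | grep 7777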

The /etc/ocfs2/cluster.conf file looks like this (partial):

cluster:
    name = mycluster
    heartbeat_mode = local
    node_count = 5

node:
    cluster = mycluster
    number = 0
    ip_port = 7777
    ip_address = 172.16.100.1
    name = 001

My next idea is to restart all of the servers…but I feel like I'm just grasping at straws. (I'm also hesitant because the original 4 nodes are in use by other users.)

Is there something I might have overlooked, or something I could test? When I first set up the 4 nodes, I built them all "from scratch" at the same time. This is the first time I've added a node to an existing cluster, so I'm wondering if I've missed a step…