Automate adding a worker node to an existing Kubernetes cluster with a bash script

I have a bare-metal Kubernetes cluster set up with 3 nodes: one master and two workers. I used kubeadm to set up the cluster. Before setting it up, I set the hostnames of the nodes to master-node, node-1, and node-2 respectively, and added entries for them in the /etc/hosts file on all three nodes:

```
# /etc/hosts (IP addresses omitted here)
master-node
node-1 worker-node-1
node-2 worker-node-2
```

Now, to add another worker node to the cluster, I have written a script to automate the process. This is my bash script:

```bash
sudo su -
yum update -y
yum install vim -y
hostnamectl set-hostname 'node-1'
cat <<EOF >> /etc/hosts
master-node
node-1 worker-node-1
EOF
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
dnf config-manager --add-repo=
dnf install -y
dnf install docker-ce -y
systemctl enable docker
systemctl start docker
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' > /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
EOF
dnf install kubeadm -y
systemctl enable kubelet
systemctl start kubelet
yum install iproute-tc -y

kubeadm join --token xxxx --discovery-token-ca-cert-hash xxxx
```
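The token and CA cert hash in the last line are hard-coded. For reference, a fresh join command (including the API server endpoint) can be printed on the master with `kubeadm token create --print-join-command`. Below is a minimal sketch of assembling the join command from variables instead; the endpoint, token, and hash values are all placeholders, not real values from my cluster:

```shell
#!/usr/bin/env bash
# Sketch: build the kubeadm join command from variables instead of
# hard-coding it. All three values below are placeholders.
master_endpoint="master-node:6443"   # assumed API server host:port
token="xxxx"                         # placeholder bootstrap token
ca_hash="sha256:xxxx"                # placeholder discovery hash

join_cmd="kubeadm join ${master_endpoint} --token ${token} --discovery-token-ca-cert-hash ${ca_hash}"
echo "$join_cmd"
```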

But I have to manually add the IP address of the new node by connecting to the instance, so I am not able to achieve this using just the bash script. I also have to add that line to the /etc/hosts file of the nodes already in my cluster; currently I do this manually. How important is this step actually, and what changes can I make to my script so that the node is added to the cluster just by running this script at the time of launching a new EC2 instance?
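For context, this is the kind of entry I add by hand, sketched here with placeholder hostname and IP values. On an EC2 instance the private IP could presumably be fetched from the instance metadata service instead of being typed in manually:

```shell
#!/usr/bin/env bash
# Sketch: build the /etc/hosts entry for a new node. The hostname and
# IP below are placeholders. On a real EC2 instance the private IP can
# be read from the instance metadata service, e.g.:
#   private_ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
new_node_name="node-3"   # placeholder hostname for the new node
private_ip="10.0.0.4"    # placeholder IP

hosts_entry="${private_ip} ${new_node_name} worker-node-3"
echo "$hosts_entry"      # this line would be appended to /etc/hosts on every node
```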