Automate adding a worker node to an existing Kubernetes cluster with a bash script

I have a bare-metal Kubernetes cluster with three nodes: one master and two workers, set up with kubeadm. Before setting up the cluster, I set the hostnames of the nodes to master-node, node-1, and node-2 respectively, and added entries for them to the /etc/hosts file on all three nodes.

/etc/hosts

10.0.1.68 master-node
10.0.29.104 node-1 worker-node-1
10.0.28.246 node-2 worker-node-2

Now, when I want to add another worker node to the cluster, I run a script I wrote to automate the process. This is my bash script:

#!/bin/bash
# "sudo su -" would open an interactive root shell and the rest of this
# script would not run inside it, so check for root instead
if [ "$(id -u)" -ne 0 ]; then
  echo "This script must be run as root" >&2
  exit 1
fi
yum update -y
yum install vim -y
# hostname is currently hard-coded for each new node
hostnamectl set-hostname 'node-1'
cat <<EOF >> /etc/hosts
10.0.1.68 master-node
10.0.29.104 node-1 worker-node-1
EOF
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# load br_netfilter and persist the setting via sysctl; writing to /proc
# directly does not survive a reboot
modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
sysctl --system
dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.4.9-3.1.el7.x86_64.rpm -y
dnf install docker-ce -y
systemctl enable docker
systemctl start docker
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' > /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# install kubelet and kubectl explicitly alongside kubeadm
dnf install kubeadm kubelet kubectl -y
systemctl enable kubelet
systemctl start kubelet
yum install iproute-tc -y

# kubeadm join needs the API server endpoint, and the CA cert hash
# must carry the sha256: prefix
kubeadm join master-node:6443 --token xxxx --discovery-token-ca-cert-hash sha256:xxxx
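Rather than hard-coding the token and CA cert hash, I believe the join command could be generated on the master at run time with `kubeadm token create --print-join-command` and fetched over SSH. A minimal sketch, assuming passwordless SSH to master-node; the SSH call is stubbed out here so only the expected output format is shown:

```shell
#!/bin/bash
# Sketch: fetch a fresh join command from the master instead of
# hard-coding the token and hash in the script.
get_join_command() {
  # On a real node this would be:
  #   ssh root@master-node 'kubeadm token create --print-join-command'
  # Stubbed with the expected output format for illustration:
  echo "kubeadm join 10.0.1.68:6443 --token abc.def --discovery-token-ca-cert-hash sha256:xxxx"
}

JOIN_CMD=$(get_join_command)
echo "$JOIN_CMD"
# The new node would then execute it, e.g.: eval "$JOIN_CMD"
```

This way the token never expires in the script, since a new one is minted each time a node joins.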

But I still have to connect to the instance and add its IP address manually, so I cannot do everything with the bash script alone. I also have to add that line to the /etc/hosts file of the nodes already in my cluster, which I currently do by hand. How important is that step actually, and what changes can I make to my script so that a new node is added to the cluster just by running this script when a new EC2 instance is launched?
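One idea I had for the IP part: on EC2 the instance's private IP is available from the instance metadata service, so the /etc/hosts line could be built automatically instead of being typed in. A sketch, assuming IMDSv1 is reachable and that my node-N naming convention continues; the curl call is commented out and replaced with a placeholder value here:

```shell
#!/bin/bash
# Sketch: build the /etc/hosts entry for the new node automatically.
# On a real EC2 instance the IP would come from the metadata service:
#   NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
NODE_IP="10.0.27.15"   # placeholder standing in for the curl result
NODE_NUM=3             # assumed next number in my node-N naming scheme
HOSTS_LINE="$NODE_IP node-$NODE_NUM worker-node-$NODE_NUM"
echo "$HOSTS_LINE"

# The same line would also need to be appended on every existing node, e.g.:
#   for host in master-node node-1 node-2; do
#     ssh root@"$host" "echo '$HOSTS_LINE' >> /etc/hosts"
#   done
```

Whether pushing the entry to every existing node over SSH is acceptable, or whether I should drop the /etc/hosts approach entirely in favour of DNS, is part of what I am asking.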