networking – Calico pods can't communicate, and Calico keeps deleting the route to the interface used by MetalLB

I'm trying to set up a cluster of 3 nodes (one master and two workers).
All 3 nodes have IP addresses in the same network (10.20.0.0/24). Calico will use 192.168.0.0/17 for cluster IPs and service IPs (I split 192.168.0.0/16 and configured kubeadm init accordingly). I added a second network interface to each node (the devices have no IP on this interface) to be used for MetalLB's pool; that network is 10.1.224.0/19. At first I thought I could simply add a static route to each node's routing table so that every node would know the route exists, but Calico overwrites it every 30 seconds. After that I tried to add a new pool to Calico that it is not allowed to pick IPs from for the Kubernetes network, only to make it aware that the network exists.
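For reference, the static route I kept adding on each node was along these lines (eth1 is a placeholder for the second, IP-less interface):

# route the MetalLB network out of the second interface
ip route add 10.1.224.0/19 dev eth1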
At the moment Calico does not allow pods to communicate: pinging pod 1 from within pod 2 results in a timeout.
traceroute shows the traffic taking the route of the node network (the node's real IP), after which the packet is lost.
I initialised the cluster using the following command:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16  --service-cidr=192.168.0.0/16

And installed Calico using the following calico.yaml file:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

I then modified the CIDR in it according to the documentation:

(screenshot: the modified CIDR setting in calico.yaml)
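In text form, the change was to the CALICO_IPV4POOL_CIDR environment variable of the calico-node DaemonSet, roughly like this (an excerpt, assuming the stock manifest layout):

# calico.yaml, env section of the calico-node container
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/17"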

After that I applied the following configuration:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: metallb-pool
spec:
  cidr: 10.1.224.0/19
  ipipMode: Never
  natOutgoing: true
  disabled: true
  nodeSelector: all()
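I applied it with calicoctl, something like this (the filename is just an example):

calicoctl apply -f metallb-pool.yaml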

This had little success: the route stays up for roughly 5 minutes, after which it gets deleted again.
I can't make Calico keep the route up so that MetalLB can use it afterwards to assign addresses to a node.
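I could watch the route disappear with something like this on one of the nodes:

watch -n 5 "ip route show 10.1.224.0/19"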
Result of calicoctl get ippool -o wide:

NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR   
default-ipv4-ippool   192.168.0.0/17   true   Always     Never       false      all()      
metallb-pool          10.1.224.0/19    true   Never      Never       true       all()   

Result of calicoctl node status:

Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.1.222.6   | node-to-node mesh | up    | 00:17:10 | Established |
| 10.1.222.7   | node-to-node mesh | up    | 00:17:14 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

Why isn't this Calico configuration working, and how can I prevent Calico from deleting the routes to the second interface that MetalLB uses?
If you would like more information about the setup, please don't hesitate to ask.

Currently running pods:

kube-system   calico-kube-controllers-7d569d95-dd4hf   1/1     Running   0          173m
kube-system   calico-node-d55f9                        1/1     Running   0          167m
kube-system   calico-node-fbzpq                        1/1     Running   0          167m
kube-system   calico-node-t95wl                        1/1     Running   0          173m
kube-system   coredns-f9fd979d6-7kj5t                  1/1     Running   0          177m
kube-system   coredns-f9fd979d6-nz5v9                  1/1     Running   0          177m
kube-system   etcd-k8s-master-1                        1/1     Running   0          177m
kube-system   kube-apiserver-k8s-master-1              1/1     Running   0          177m
kube-system   kube-controller-manager-k8s-master-1     1/1     Running   0          177m
kube-system   kube-proxy-5zpwg                         1/1     Running   0          167m
kube-system   kube-proxy-ts4dn                         1/1     Running   0          167m
kube-system   kube-proxy-z4vvw                         1/1     Running   0          177m
kube-system   kube-scheduler-k8s-master-1              1/1     Running   0          177m