I have an Ubuntu 20.04.1 server with 6 NVMe SSDs that were configured as an mdadm RAID10 array, md0. To troubleshoot a filesystem corruption problem with Graphite, I'm trying to switch from ext4 to ZFS. I stopped the md0 array via
mdadm --manage --stop /dev/md0 so I could create a zpool, but within a few seconds the array keeps getting recreated as md127 by a
/sbin/mdadm --monitor --scan process, and I see a
(md127_raid10) kernel thread managing the array.
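Concretely, the sequence I'm running looks roughly like this (the device names and pool layout below are illustrative, not my exact commands):

```shell
# Stop the existing mdadm array
sudo mdadm --manage --stop /dev/md0

# Attempt to create a ZFS pool on the same disks
# (pool name "tank" and the striped-mirror layout are examples)
sudo zpool create tank \
    mirror /dev/nvme0n1 /dev/nvme1n1 \
    mirror /dev/nvme2n1 /dev/nvme3n1 \
    mirror /dev/nvme4n1 /dev/nvme5n1

# But before zpool create can claim the disks, the array is back:
cat /proc/mdstat    # shows md127 active raid10 again
```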
Here are my /etc/mdadm/mdadm.conf and /etc/default/mdadm files, untouched from their default contents. Here is the detail output for md127.
How do I properly (i.e. according to Ubuntu and mdadm design principles, not just by hacking in a script as I've seen suggested in other questions and blog posts) completely remove the metadevice so I can reuse the disks in a ZFS pool?