18.04 – Is there a problem with my Ubuntu RAID array?

I have an Ubuntu 18.04 server with two 1 TB drives in a software RAID1 array, configured during the OS installation. I wanted to check the health of the array and disks this evening, and I am not sure whether there is a problem with one of my drives.

I see in the `mdadm --detail /dev/md0` output that one of the drives appears as removed, and I gather from another Ask Ubuntu question that the `[_U]` in the `cat /proc/mdstat` output possibly signals that a member device has failed.

From the results below, has a drive failed? If so, what is the best course of action? Also, how do I set it so it emails me when a drive fails?
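For the email part, this is how I understand alert testing is supposed to work, based on the `MAILADDR` line already present in my `mdadm.conf` (assuming a working local mail transport such as postfix; I have not verified this on this box):

```shell
# Check that the mdadm monitor daemon is running (I believe Ubuntu ships
# it as mdmonitor, but the unit name may differ):
sudo systemctl status mdmonitor

# /etc/mdadm/mdadm.conf currently has "MAILADDR root"; pointing it at a
# real mailbox should direct failure alerts there, e.g.:
#   MAILADDR admin@example.com   # example address, not my real config

# Send a one-off TestMessage alert for every array to confirm delivery:
sudo mdadm --monitor --scan --test --oneshot
```

Is that the right approach, or is something else needed on 18.04?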

sudo fdisk -l

Disk /dev/loop0: 99.2 MiB, 104030208 bytes, 203184 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 99.2 MiB, 104026112 bytes, 203176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 310AB5A9-6622-49D9-82C3-B1F2E53DD560

Device       Start        End    Sectors   Size Type
/dev/sda1     2048    2099199    2097152     1G Linux filesystem
/dev/sda2  2099200 1953521663 1951422464 930.5G Linux filesystem


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D634E279-44CC-4ED5-B380-07D02C3C3601

Device       Start        End    Sectors   Size Type
/dev/sdb1     2048       4095       2048     1M BIOS boot
/dev/sdb2     4096    2101247    2097152     1G Linux filesystem
/dev/sdb3  2101248 1953521663 1951420416 930.5G Linux filesystem


Disk /dev/md0: 930.4 GiB, 998991986688 bytes, 1951156224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

lsblk

NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0     7:0    0  99.2M  1 loop  /snap/core/10908
loop2     7:2    0  99.2M  1 loop  /snap/core/10859
sda       8:0    0 931.5G  0 disk
├─sda1    8:1    0     1G  0 part
└─sda2    8:2    0 930.5G  0 part
sdb       8:16   0 931.5G  0 disk
├─sdb1    8:17   0     1M  0 part
├─sdb2    8:18   0     1G  0 part  /boot
└─sdb3    8:19   0 930.5G  0 part
  └─md0   9:0    0 930.4G  0 raid1 /

cat /etc/mdadm/mdadm.conf

ARRAY /dev/md0 metadata=1.2 name=ubuntu-server:0 UUID=1d9d79bd:d675f751:144db975:0d24caa9
MAILADDR root

sudo mdadm --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Fri Aug 30 21:55:50 2019
        Raid Level : raid1
        Array Size : 975578112 (930.38 GiB 998.99 GB)
     Used Dev Size : 975578112 (930.38 GiB 998.99 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Apr  3 23:18:52 2021
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : ubuntu-server:0
              UUID : 1d9d79bd:d675f751:144db975:0d24caa9
            Events : 1907228

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       19        1      active sync   /dev/sdb3

cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb3[1]
      975578112 blocks super 1.2 [2/1] [_U]
      bitmap: 8/8 pages (32KB), 65536KB chunk

unused devices: <none>
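In case it clarifies the question, this is the rough fix I expect is needed (check the dropped disk first, then re-attach its RAID member partition), but I would like confirmation before running anything, since on my layout the partition tables of sda and sdb differ:

```shell
# Check the health of the drive that dropped out of the array first
# (smartctl is from the smartmontools package):
sudo smartctl -a /dev/sda

# If the disk looks healthy, try re-attaching its RAID partition;
# I am assuming the mirror member on sda is sda2, given the sizes above:
sudo mdadm /dev/md0 --re-add /dev/sda2

# If --re-add is refused, add it as a fresh member and let it resync:
sudo mdadm /dev/md0 --add /dev/sda2
cat /proc/mdstat   # watch the rebuild progress
```

Is that the correct partition to re-add, and is `--re-add` vs `--add` the right distinction here?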