You actually needed only the last command; formatting each HDD separately is not how btrfs works. It formats and creates the RAID in one action, and the profile options determine what type of RAID it is.
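For reference, that single step would look roughly like this. The device paths are assumptions for illustration; adjust them to your setup, and be aware that -f wipes any previous signatures on the disks:
Code:
# Sketch of the one-step format-and-RAID command (assumed device paths;
# -f force-wipes existing signatures such as old mdadm metadata):
sudo mkfs.btrfs -f -L my4HDDraid10 \
    -d raid10 -m raid10 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
-d sets the profile for data and -m for metadata; here both are raid10 across the 4 devices.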
The RAID array can be mounted via any one of its member devices, i.e. a /dev/... path; btrfs then scans and finds the other 3 devices as well. But that older method fails if you change the number of HDDs or move the disks to another computer. You can always see and use the filesystem UUID, which is unique, but for a personal RAID system like this I would prefer to identify it by label. In fstab you then need this:
Code:
LABEL=my4HDDraid10 /mnt/cloud btrfs nofail,noatime 0 0
Of course, remove the mdadm entries from your earlier trials. I also think it is good to reboot, because your Pi might still think it has an mdadm RAID array; that could fail or even damage the btrfs structures. You have force-formatted the 4 HDDs with btrfs, so the mdadm structures are wiped and the old array won't be found anymore after a reboot.
After the reboot, copy some files to the RAID filesystem; you can then see how btrfs allocates blocks equally over all 4 HDDs with:
Code:
sudo btrfs device usage /mnt/cloud
You can check the RAID system for corrupt blocks (and have them auto-corrected) with:
Code:
sudo btrfs scrub start /mnt/cloud
That will take some time if the filesystem is fuller; you can check how far it has progressed with:
Code:
sudo btrfs scrub status /mnt/cloud
I see from your btrfs-progs version that you run an older version of Raspberry Pi OS. That is not a problem for this RAID10, but btrfs is still getting new features, and for those you need a newer kernel and newer btrfs-progs.
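To see which versions you currently have, a quick check (standard btrfs-progs and coreutils commands; the guard only skips the btrfs call if the tools are not installed):
Code:
# Show the userspace btrfs-progs version and the running kernel version;
# together they determine which btrfs features are available to you.
command -v btrfs >/dev/null && btrfs --version
uname -r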
Also, I am not sure if you already use a 64-bit OS. A 32-bit OS cannot handle sizes larger than 8 TiB. Although 4x 2 TB is still below that, if you add or remove devices and balance the RAID system, the (virtual) address numbers keep growing and will eventually hit that limit. A 64-bit OS does not have this problem.
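Checking whether you run 64-bit is quick (standard commands, nothing btrfs-specific):
Code:
# Prints the userland word size (32 or 64) and the machine architecture;
# on a Pi, aarch64 indicates a 64-bit kernel, armv7l a 32-bit one.
getconf LONG_BIT
uname -m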
See also https://btrfs.readthedocs.io/
Statistics: Posted by redvli — Fri Sep 13, 2024 4:50 am