Hello,
Starting on 2025-11-12, we will change the partitioning layout of new Debian 13 and Proxmox VE 9 installations started from the OVHcloud control panel on UEFI-boot Bare Metal servers.
Before the change, we created multiple EFI System Partitions (ESPs) on UEFI boot servers. This is what it looked like:
root@debian13-multiple-esps:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
nvme1n1
├─nvme1n1p1 vfat FAT16 EFI_SYSPART 16C3-1FAC 510.6M 0% /boot/efi
├─nvme1n1p2 linux_raid_member 1.2 md2 bd2a7a2a-515a-72a1-b8aa-666d091d56df
│ └─md2 ext4 1.0 boot 5ffc56b9-d758-465f-90af-fc8091e26534 840.5M 8% /boot
├─nvme1n1p3 linux_raid_member 1.2 md3 204a16e1-c4df-0ec2-dd3e-740374eddf60
│ └─md3 ext4 1.0 root 5cece5f7-1800-4989-bbbd-a6c60b0c93d2 387.6G 0% /
└─nvme1n1p4 swap 1 swap-nvme1n1p4 1da3be0d-f5d6-4f1e-81c5-bd23147deefd [SWAP]
nvme0n1
├─nvme0n1p1 vfat FAT16 EFI_SYSPART 16AA-63D3
├─nvme0n1p2 linux_raid_member 1.2 md2 bd2a7a2a-515a-72a1-b8aa-666d091d56df
│ └─md2 ext4 1.0 boot 5ffc56b9-d758-465f-90af-fc8091e26534 840.5M 8% /boot
├─nvme0n1p3 linux_raid_member 1.2 md3 204a16e1-c4df-0ec2-dd3e-740374eddf60
│ └─md3 ext4 1.0 root 5cece5f7-1800-4989-bbbd-a6c60b0c93d2 387.6G 0% /
├─nvme0n1p4 swap 1 swap-nvme0n1p4 c194279f-d6f8-42f4-90be-218b682c897a [SWAP]
└─nvme0n1p5 iso9660 Joliet Extension config-2 2025-10-08-14-13-17-00
root@debian13-multiple-esps:~# grep /boot/efi /etc/fstab
LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1
Both ESPs were synced when the OS was installed, but never afterwards: since only one ESP was mounted at a given time, their contents could drift apart over time. That caused several issues, the worst of which prevented the system from booting after major GRUB upgrades.
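On the old layout, you can check for this drift manually. A minimal sketch, assuming the second (unmounted) ESP is nvme0n1p1 as in the listing above:

```shell
# Mount the second, normally unmounted ESP read-only and compare it
# against the active one to detect drift between the two copies.
mkdir -p /mnt/esp2
mount -o ro /dev/nvme0n1p1 /mnt/esp2
diff -r /boot/efi /mnt/esp2 && echo "ESPs in sync" || echo "ESPs differ"
umount /mnt/esp2
```

If the ESPs differ, copying the contents of the mounted ESP over the stale one (or reinstalling GRUB on both) brings them back in sync.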
To solve this problem, any Debian 13 or Proxmox VE 9 installation started after 2025-11-12 will follow a new layout where the ESP is created on top of an md RAID1 array:
root@debian13-esp-over-raid1:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
nvme1n1
├─nvme1n1p1 linux_raid_member 0.90.0 fbb410d2-eed5-8b9c-0edd-9cfbcf09b877
│ └─md1 vfat FAT16 EFI_SYSPART 20B8-687C 510.5M 0% /boot/efi
├─nvme1n1p2 linux_raid_member 1.2 md2 6ef065ad-16e9-52ad-384c-94e574f41f0e
│ └─md2 ext4 1.0 boot 5d8c496e-5aed-4f04-a8a4-2a4690b7e556 840.5M 8% /boot
├─nvme1n1p3 linux_raid_member 1.2 md3 c7110db8-4597-ab47-5f18-3d7634ff3347
│ └─md3 ext4 1.0 root f97ca283-2a1e-4222-80a6-6abcd22f5d5d 387.6G 0% /
├─nvme1n1p4 swap 1 swap-nvme0n1p4 198999e8-1150-46e9-a342-4c84ca48aa61 [SWAP]
└─nvme1n1p5 iso9660 Joliet Extension config-2 2025-10-08-15-24-14-00
nvme0n1
├─nvme0n1p1 linux_raid_member 0.90.0 fbb410d2-eed5-8b9c-0edd-9cfbcf09b877
│ └─md1 vfat FAT16 EFI_SYSPART 20B8-687C 510.5M 0% /boot/efi
├─nvme0n1p2 linux_raid_member 1.2 md2 6ef065ad-16e9-52ad-384c-94e574f41f0e
│ └─md2 ext4 1.0 boot 5d8c496e-5aed-4f04-a8a4-2a4690b7e556 840.5M 8% /boot
├─nvme0n1p3 linux_raid_member 1.2 md3 c7110db8-4597-ab47-5f18-3d7634ff3347
│ └─md3 ext4 1.0 root f97ca283-2a1e-4222-80a6-6abcd22f5d5d 387.6G 0% /
└─nvme0n1p4 swap 1 swap-nvme1n1p4 a828894f-c367-439c-bb63-c4dd9e967faa [SWAP]
root@debian13-esp-over-raid1:~# grep /boot/efi /etc/fstab
LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1
Please note that the FAT filesystem keeps the same EFI_SYSPART label, so there is no change to /etc/fstab. We use md metadata version 0.90 so that the RAID superblock is located at the end of the partition: at boot, the firmware sees nvme0n1p1 and nvme1n1p1 as ordinary FAT partitions and can load the bootloader from either of them.
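For reference, an array with these properties could be built as follows. This is an illustrative sketch, not the exact commands the OVHcloud installer runs:

```shell
# Create a RAID1 array with the superblock at the end of each member
# (metadata 0.90), so the firmware still sees a plain FAT partition,
# then format it as FAT16 with the label the fstab entry expects.
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 \
      /dev/nvme0n1p1 /dev/nvme1n1p1
mkfs.vfat -F 16 -n EFI_SYSPART /dev/md1
```

With this layout, writes to /boot/efi go through md and are mirrored to both members, so the two copies can no longer drift apart.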
Going forward, this new layout will be applied to all Linux OSes newly added to the catalog.
We may also apply this change to other existing Linux OSes in the future, in which case I will update this topic.
This change does not affect:
- legacy boot servers
- UEFI boot servers when the OS is installed on one disk (no RAID so only one normal ESP)
Please note that, with the new ESP layout, grub-install fails if it is called without --no-nvram, because it attempts to add NVRAM boot entries for the md RAID1 array, which does not make sense to the EFI stack.
This works:
root@debian13-esp-over-raid1:~# grub-install --no-nvram
This fails:
root@debian13-esp-over-raid1:~# grub-install
In any case, you should not alter an OVHcloud server's boot order as that would break PXE boot and later prevent the server from booting into rescue mode. When booting to disk, servers execute iPXE which then chainloads GRUB (or something else, depending on the value of the server's efiBootloaderPath attribute).
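If you want to verify the current boot entries without touching them, efibootmgr can list them read-only:

```shell
# List UEFI boot entries and the boot order without changing anything.
# On OVHcloud servers, leave the order as-is so PXE boot and rescue
# mode keep working.
efibootmgr -v
```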
On Debian-based distributions, this GRUB setting is managed by the grub-efi-amd64 package with the following prompt:
Corresponding to the following debconf option:
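As a sketch of how that option could be inspected and preseeded: I am assuming here that the debconf template in question is grub2/update_nvram (shipped by grub-efi-amd64 in my experience), so verify the exact name on your system with debconf-show before relying on it.

```shell
# Show the debconf answers currently recorded for grub-efi-amd64,
# then preseed the NVRAM-update question (template name is an assumption).
debconf-show grub-efi-amd64
echo "grub-efi-amd64 grub2/update_nvram boolean false" | debconf-set-selections
```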