Dedicated Servers - Upcoming change: EFI System Partition over RAID1 for Linux installations
Question

Upcoming change: EFI System Partition over RAID1 for Linux installations

By
le_sbraz
Contributor
Created on 2025-10-08 16:29:14 (edited on 2025-10-10 13:41:23) in Dedicated Servers

Hello,

Starting on 2025-11-12, we will change the partitioning layout for new Debian 13 and Proxmox VE 9 installations on UEFI boot Bare Metal servers started from the OVHcloud control panel.

Before the change, we created multiple EFI System Partitions (ESPs) on UEFI boot servers. This is what it looked like:

root@debian13-multiple-esps:~# lsblk -f
NAME        FSTYPE            FSVER            LABEL          UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme1n1                                                                                                           
├─nvme1n1p1 vfat              FAT16            EFI_SYSPART    16C3-1FAC                             510.6M     0% /boot/efi
├─nvme1n1p2 linux_raid_member 1.2              md2            bd2a7a2a-515a-72a1-b8aa-666d091d56df                
│ └─md2     ext4              1.0              boot           5ffc56b9-d758-465f-90af-fc8091e26534  840.5M     8% /boot
├─nvme1n1p3 linux_raid_member 1.2              md3            204a16e1-c4df-0ec2-dd3e-740374eddf60                
│ └─md3     ext4              1.0              root           5cece5f7-1800-4989-bbbd-a6c60b0c93d2  387.6G     0% /
└─nvme1n1p4 swap              1                swap-nvme1n1p4 1da3be0d-f5d6-4f1e-81c5-bd23147deefd                [SWAP]
nvme0n1                                                                                                           
├─nvme0n1p1 vfat              FAT16            EFI_SYSPART    16AA-63D3                                           
├─nvme0n1p2 linux_raid_member 1.2              md2            bd2a7a2a-515a-72a1-b8aa-666d091d56df                
│ └─md2     ext4              1.0              boot           5ffc56b9-d758-465f-90af-fc8091e26534  840.5M     8% /boot
├─nvme0n1p3 linux_raid_member 1.2              md3            204a16e1-c4df-0ec2-dd3e-740374eddf60                
│ └─md3     ext4              1.0              root           5cece5f7-1800-4989-bbbd-a6c60b0c93d2  387.6G     0% /
├─nvme0n1p4 swap              1                swap-nvme0n1p4 c194279f-d6f8-42f4-90be-218b682c897a                [SWAP]
└─nvme0n1p5 iso9660           Joliet Extension config-2       2025-10-08-14-13-17-00                              
root@debian13-multiple-esps:~# grep /boot/efi /etc/fstab 
LABEL=EFI_SYSPART	/boot/efi	vfat	defaults	0	1

Both ESPs were synced when the OS was installed, but not afterwards. Having only one ESP mounted at a given time could lead to them getting out of sync. That resulted in several issues, the worst of which prevented the system from booting after major GRUB upgrades.

To solve this problem, any Debian 13 or Proxmox VE 9 installation started after 2025-11-12 will follow a new layout where the ESP is created on top of an md RAID1 array:

root@debian13-esp-over-raid1:~# lsblk -f
NAME        FSTYPE            FSVER            LABEL          UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme1n1                                                                                                           
├─nvme1n1p1 linux_raid_member 0.90.0                          fbb410d2-eed5-8b9c-0edd-9cfbcf09b877                
│ └─md1     vfat              FAT16            EFI_SYSPART    20B8-687C                             510.5M     0% /boot/efi
├─nvme1n1p2 linux_raid_member 1.2              md2            6ef065ad-16e9-52ad-384c-94e574f41f0e                
│ └─md2     ext4              1.0              boot           5d8c496e-5aed-4f04-a8a4-2a4690b7e556  840.5M     8% /boot
├─nvme1n1p3 linux_raid_member 1.2              md3            c7110db8-4597-ab47-5f18-3d7634ff3347                
│ └─md3     ext4              1.0              root           f97ca283-2a1e-4222-80a6-6abcd22f5d5d  387.6G     0% /
├─nvme1n1p4 swap              1                swap-nvme0n1p4 198999e8-1150-46e9-a342-4c84ca48aa61                [SWAP]
└─nvme1n1p5 iso9660           Joliet Extension config-2       2025-10-08-15-24-14-00                              
nvme0n1                                                                                                           
├─nvme0n1p1 linux_raid_member 0.90.0                          fbb410d2-eed5-8b9c-0edd-9cfbcf09b877                
│ └─md1     vfat              FAT16            EFI_SYSPART    20B8-687C                             510.5M     0% /boot/efi
├─nvme0n1p2 linux_raid_member 1.2              md2            6ef065ad-16e9-52ad-384c-94e574f41f0e                
│ └─md2     ext4              1.0              boot           5d8c496e-5aed-4f04-a8a4-2a4690b7e556  840.5M     8% /boot
├─nvme0n1p3 linux_raid_member 1.2              md3            c7110db8-4597-ab47-5f18-3d7634ff3347                
│ └─md3     ext4              1.0              root           f97ca283-2a1e-4222-80a6-6abcd22f5d5d  387.6G     0% /
└─nvme0n1p4 swap              1                swap-nvme1n1p4 a828894f-c367-439c-bb63-c4dd9e967faa                [SWAP]
root@debian13-esp-over-raid1:~# grep /boot/efi /etc/fstab 
LABEL=EFI_SYSPART	/boot/efi	vfat	defaults	0	1

Please note that the FAT filesystem still uses the same EFI_SYSPART label, meaning there are no changes to the fstab. We use metadata version 0.90 so that the md superblock is located at the end of the partition. This way, at boot, the firmware treats nvme0n1p1 and nvme1n1p1 as normal FAT partitions and can load the bootloader from either of them.
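For reference, an equivalent array could be assembled by hand roughly as follows. This is a sketch, not the installer's exact commands, and it assumes /dev/nvme0n1p1 and /dev/nvme1n1p1 are the (empty) ESP partitions:

```shell
# Sketch: ESP on RAID1 with metadata 0.90, so the md superblock sits at the
# end of each member and the firmware sees a plain FAT partition at offset 0.
# Not the installer's exact commands; destroys data on the listed partitions.
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 \
    /dev/nvme0n1p1 /dev/nvme1n1p1
# Same FAT16 filesystem and label as before, so fstab is unchanged
mkfs.vfat -F 16 -n EFI_SYSPART /dev/md1
```

With metadata 1.2, by contrast, the superblock lives near the start of the partition and the firmware would no longer find a valid FAT boot sector there.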

Going forward, this new layout will be applied to all Linux OSes newly added to the catalog.

We may also apply this change to other existing Linux OSes in the future, in which case I will update this topic.

This change does not affect:

  • legacy boot servers
  • UEFI boot servers when the OS is installed on one disk (no RAID so only one normal ESP)
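To check which layout a given server uses, one way (a sketch, not an official tool) is to look at the device backing /boot/efi:

```shell
# Old layout: /boot/efi is backed by a plain partition (e.g. /dev/nvme1n1p1)
# New layout: /boot/efi is backed by an md device (e.g. /dev/md1)
src=$(findmnt -no SOURCE /boot/efi)
case "$src" in
    /dev/md*) echo "new layout: ESP on RAID1 ($src)" ;;
    *)        echo "old layout: plain ESP ($src)" ;;
esac
```

findmnt ships with util-linux and is available on any standard Debian or Proxmox VE installation.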


1 answer (latest reply on 2025-10-17 17:22:08 by le_sbraz)

Please note that, with the new ESP layout, grub-install fails if it is called without --no-nvram, because it attempts to add boot entries for the md RAID1 array, which does not make sense to the EFI stack.


This works:

root@esp-over-raid:~# grub-install --no-nvram
Installing for x86_64-efi platform.
Installation finished. No error reported.


This fails:

root@esp-over-raid:~# grub-install
Installing for x86_64-efi platform.
grub-install.real: warning: efivarfs_get_variable: open(/sys/firmware/efi/efivars/blk0-47c7b225-c42a-11d2-8e57-00a0c969723b): No such file or directory.
grub-install.real: warning: efi_get_variable: ops->get_variable failed: No such file or directory.
grub-install.real: warning: efi_va_generate_file_device_path_from_esp: could not open device for ESP: Bad address.
grub-install.real: warning: efi_generate_file_device_path_from_esp: could not generate File DP from ESP: Bad address.
grub-install.real: error: failed to register the EFI boot entry: Bad address.


In any case, you should not alter an OVHcloud server's boot order, as that would break PXE boot and later prevent the server from booting into rescue mode. When booting to disk, servers execute iPXE, which then chainloads GRUB (or something else, depending on the value of the server's efiBootloaderPath attribute).

On Debian-based distributions, this GRUB setting is managed by the grub-efi-amd64 package with the following prompt:

Package configuration


 ┌──────────────────────┤ Configuring grub-efi-amd64 ├───────────────────────┐
 │                                                                           │ 
 │ GRUB can configure your platform's NVRAM variables so that it boots into  │ 
 │ Debian automatically when powered on. However, you may prefer to disable  │ 
 │ this behavior and avoid changes to your boot configuration. For example,  │ 
 │ if your NVRAM variables have been set up such that your system contacts   │ 
 │ a PXE server on every boot, this would preserve that behavior.            │ 
 │                                                                           │ 
 │ Update NVRAM variables to automatically boot into Debian?                 │ 
 │                                                                           │ 
 │                    <Yes>                       <No>                       │ 
 │                                                                           │ 
 └───────────────────────────────────────────────────────────────────────────┘ 
                                                                               


This corresponds to the following debconf option:

root@esp-over-raid:~# debconf-show grub-efi-amd64 | grep grub2/update_nvram:
* grub2/update_nvram: false
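For unattended setups, the same answer can be preseeded so that grub-efi-amd64 upgrades never prompt and never touch NVRAM (a sketch; run as root before installing or upgrading the package):

```shell
# Preseed grub2/update_nvram=false so grub-efi-amd64 never rewrites NVRAM
# boot entries (which would break the PXE-first boot order described above)
echo "grub-efi-amd64 grub2/update_nvram boolean false" | debconf-set-selections
```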