Disk Space Gone
Question

Disk Space Gone

by
Jesus Caceres Sanz
Created on 2025-07-22 11:33:27 (edited on 2025-07-28 07:45:57) in Dedicated Servers

A few days ago, I installed an ADVANCE-1 | AMD EPYC 4244P server with 2 × 960 GB NVMe SSDs in software RAID.

When I created the partitions I had approximately 1.8 TB in total, but now when I run fdisk -l the full space no longer appears and I get the error "Partition 6 does not start on physical sector boundary."

# fdisk -l
Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: SAMSUNG MZQL2960HCJR-00A07
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 68CAA449-95BA-4970-B28C-A76ABCC4DB5A

Device              Start        End    Sectors   Size Type
/dev/nvme0n1p1       2048    1048575    1046528   511M EFI System
/dev/nvme0n1p2    1048576   13385727   12337152   5.9G Linux RAID
/dev/nvme0n1p3   13385728  185399295  172013568    82G Linux RAID
/dev/nvme0n1p4  185399296  196687871   11288576   5.4G Linux filesystem
/dev/nvme0n1p5  196687872 1835087871 1638400000 781.3G Linux RAID
/dev/nvme0n1p6 1875380912 1875384974       4063     2M Linux filesystem

Partition 6 does not start on physical sector boundary.


Disk /dev/nvme1n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: SAMSUNG MZQL2960HCJR-00A07
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 9DF9C4D5-66C1-4C98-A432-CAA3E36DFFE4

Device             Start        End    Sectors   Size Type
/dev/nvme1n1p1      2048    1048575    1046528   511M EFI System
/dev/nvme1n1p2   1048576   13385727   12337152   5.9G Linux RAID
/dev/nvme1n1p3  13385728  185399295  172013568    82G Linux RAID
/dev/nvme1n1p4 185399296  196687871   11288576   5.4G Linux filesystem
/dev/nvme1n1p5 196687872 1835087871 1638400000 781.3G Linux RAID


Disk /dev/md3: 81.96 GiB, 88002789376 bytes, 171880448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/md2: 5.88 GiB, 6311378944 bytes, 12326912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/md5: 781.12 GiB, 838725533696 bytes, 1638135808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes

Can you help me figure out what's going on? Thanks


5 Replies (latest reply on 2025-07-28 07:45:57 by fritz2cat 🇧🇪 🇪🇺)

Hello,

You should never try to access any of the /dev/nvme* partitions directly, as they are members of a RAID-1 array.

Your usable devices are /dev/md2, /dev/md3 and /dev/md5.

894 GiB per disk (expressed in multiples of 1024) is the same capacity as 960 GB (expressed in multiples of 1000); the ~781 GiB of /dev/md5 is what remains once the smaller system partitions are set aside.
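
Spelled out with the byte count fdisk reports for each disk:

960197124096 bytes / 1024^3 ≈ 894.25 GiB   (the figure fdisk shows)
960197124096 bytes / 1000^3 ≈ 960.2 GB     (the advertised capacity)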

Your 2 disks are mirrored, hence you have only half the capacity available.

Should one disk fail, you don't lose all your data, and you can reconstruct the data after the disk has been replaced.
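
If you want to verify the mirroring yourself, the md tools show each array and its member disks; a minimal check, run as root, would be:

cat /proc/mdstat            # overview of all arrays, their raid1 level and sync status
mdadm --detail /dev/md5     # members, state and size of the large data array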

 

Hi,
I recommend using the following commands to help you better understand how partitions and RAID devices are nested:

lsblk # lists devices and partitions, including RAID levels; if you used RAID 1, it makes sense that you have less than 1 TB usable

lsblk -f # adds info about filesystems

df -h # shows the amount of free disk space on each mountpoint

tune2fs -l /dev/md5 # assuming you used ext4, you will see a certain number of blocks reserved for the root user, which you can change with tune2fs -m (see the example below)
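
For instance, assuming /dev/md5 is your ext4 data partition, you could check the reservation and lower it; the 1% value below is only an illustration:

tune2fs -l /dev/md5 | grep -i 'reserved block'   # blocks currently reserved for root
tune2fs -m 1 /dev/md5                            # reserve 1% instead of the default 5%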

 

As for the "Partition 6 does not start on physical sector boundary" warning, it is only shown for the "config-drive" partition, which is used at the first boot of your server when cloud-init configures the machine, see https://cloudinit.readthedocs.io/en/latest/reference/datasources/configdrive.html
As you don't need to access data on this partition, the warning is completely harmless: there will be no performance hit from the imperfect alignment. You can even delete the partition if you wish; it was only useful when your server was first installed.
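
If you do decide to remove it, double-check the device and partition number first (in your output it only exists on /dev/nvme0n1); a sketch with parted would be:

parted /dev/nvme0n1 rm 6     # delete the 2 MiB config-drive partition (irreversible)
partprobe /dev/nvme0n1       # ask the kernel to re-read the partition table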

Hello,

Where is the 10:34 reply? Is the forum index corrupted?

 

Hello, 

The issue has been clearly identified; the teams are currently investigating it.

Thank you for flagging it.

^FabL