Hello,
My sites (PHP files, database, etc.) have jumped 8 years back in time…
Could this be a RAID bug? Is it possible to recover the recent data?
**Here is the story:**
* sites unreachable
* the usual SSH connection reported an invalid password
* rebooted into rescue mode, mounted a partition (perhaps a mistake on my part at this point? among other things I ran mount /dev/md2 /mnt), chrooted and changed the passwords
* rebooted the server: sites and database back at their 2015 versions...
Thanks in advance to anyone who can help me out of this nightmare!
**Some technical details and the commands I ran:**
I was receiving this type of email daily:
This is an automatically generated mail message from mdadm
running on xxxxxxxxx.com
A DegradedArray event had been detected on md device /dev/md2.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sdb1[1]
20971456 blocks [2/1] [_U]
md2 : active raid1 sdb2[1]
955260864 blocks [2/1] [_U]
unused devices: &lt;none&gt;
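For readers puzzling over the notation in that mail: in an mdstat line, `[2/1]` means 2 members expected but only 1 active, and each character of `[_U]` is one slot (`U` = up, `_` = missing). A small parse of the quoted status line, just to illustrate (the string is copied from the mail above):

```shell
# Parse the degraded-array flags from the mdstat excerpt above.
# "[2/1]" = 2 members expected, 1 active; "[_U]" = slot 0 down, slot 1 up,
# i.e. sda2 had already dropped out and sdb2 was carrying md2 alone.
status='955260864 blocks [2/1] [_U]'
expected=$(echo "$status" | sed -n 's/.*\[\([0-9]*\)\/[0-9]*\].*/\1/p')
active=$(echo "$status" | sed -n 's/.*\[[0-9]*\/\([0-9]*\)\].*/\1/p')
echo "expected=$expected active=$active"   # → expected=2 active=1
```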
Commands run on the server:
_root@rescue-customer-eu (ksxxxxxx.kimsufi.com) /var/log # mount /dev/md2 /mnt_
_root@rescue-customer-eu (ksxxxxxx.kimsufi.com) /var/log # df -h_
_Filesystem Size Used Avail Use% Mounted on_
_devtmpfs 7.6G 0 7.6G 0% /dev_
_tmpfs 16G 0 16G 0% /dev/shm_
_tmpfs 100M 4.2M 96M 5% /run_
_tmpfs 5.0M 0 5.0M 0% /run/lock_
_tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup_
_tmpfs 7.8G 4.0K 7.8G 1% /tmp_
_tmpfs 7.8G 524K 7.8G 1% /var/log_
_tmpfs 1.6G 0 1.6G 0% /run/user/65534_
_tmpfs 1.6G 0 1.6G 0% /run/user/0_
_root@rescue-customer-eu (ksxxxxxx.kimsufi.com) ~ # mkdir -p /mnt/md1_
_root@rescue-customer-eu (ksxxxxxx.kimsufi.com) ~ # mount /dev/md1 /mnt/md1_
_root@rescue-customer-eu (ksxxxxxx.kimsufi.com) ~ # lsblk_
_NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT_
_sda 8:0 0 931.5G 0 disk_
_├─sda1 8:1 0 20G 0 part_
_│ └─md1 9:1 0 20G 0 raid1 /mnt/md1_
_├─sda2 8:2 0 911G 0 part_
_│ └─md2 9:2 0 911G 0 raid1 /mnt/md2_
_└─sda3 8:3 0 513M 0 part_
_root@rescue-customer-eu (ksxxxxxx.kimsufi.com) /mnt/md1 # chroot /mnt/md1/_
_root@rescue-customer-eu:/# passwd_
Kimsufi raid1 restored an old version of the disk
If your disk sda was more or less failing, did you really just let the situation fester?
How long had it been since you last rebooted the server?
What does /proc/mdstat say today?
If sda really was copied back over sdb, your only remaining option will be backups you took outside the server. If there are none, it looks grim.
Restore your backups...
Beyond that: pray, light a few (hundred?) candles, and hope the two disks were not synchronized and that one of them still holds the up-to-date data... Go back into rescue mode and mount the partitions directly, not via mdX but via sdaX and sdbX... Odds are it won't work...
And if there are no backups... remember that rule number one is having backups...
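The advice above, mounting the member partitions (sdaX/sdbX) directly rather than the md devices, can be sketched as follows. Device names are taken from this thread, and the 0.90 metadata layout (superblock at the end of the partition, which is why a member usually mounts on its own) is an assumption; treat this as a read-only probe, not a recipe:

```shell
# Rescue-mode sketch (assumed device names from this thread).
# Mount each RAID1 member read-only and check which copy is newest.
result=""
for part in sda2 sdb2; do
  dev=/dev/$part
  if [ -b "$dev" ]; then
    # Which copy was updated last? (0.90 superblock sits at end of partition)
    mdadm --examine "$dev" | grep -i 'update time'
    mkdir -p "/mnt/$part"
    mount -o ro "$dev" "/mnt/$part"   # read-only: never write to a suspect disk
    result="$result $part:present"
  else
    result="$result $part:absent"
  fi
done
echo "checked:$result"
```

Mounting read-only matters here: any write to the stale member would make later comparison or recovery harder.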
Hello,
Since when?
Regards, janus57
A release3 on CentOS 6?
You should do the following:
reboot into rescue mode (at this point it can hardly make things worse)
run the following commands (which cannot modify anything):
cat /proc/mdstat
smartctl -a /dev/sda
smartctl -a /dev/sdb
fdisk -l /dev/sda
fdisk -l /dev/sdb
Paste the output of those 5 commands here (it is fairly long)
and don't touch anything, though it is probably already too late.
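One convenient way to gather those five outputs into a single file for posting (the file path is just an example):

```shell
# Run the five read-only diagnostics and capture everything, errors
# included, into one file ready to paste into the thread.
out=/tmp/diag.txt
: > "$out"
for cmd in 'cat /proc/mdstat' 'smartctl -a /dev/sda' 'smartctl -a /dev/sdb' \
           'fdisk -l /dev/sda' 'fdisk -l /dev/sdb'; do
  echo "== $cmd ==" >> "$out"
  $cmd >> "$out" 2>&1 || true   # keep going even if a device is gone
done
echo "captured into $out"
```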
Thanks for these leads! I'll run the commands and post the results.
I do at least have manual backups, phew...
I didn't realize what those daily emails meant; from the outside the server was running "perfectly" and I didn't know how to fix the problem, so yes, I let it "rot" for a few years. Obviously I won't do that again.
**Smartctl open device: /dev/sdb failed: No such device**
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md2 : active raid1 sda2[0]
955260864 blocks [2/1] [U_]
md1 : active raid1 sda1[0]
20971456 blocks [2/1] [U_]
unused devices: &lt;none&gt;
----------
smartctl -a /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-6.1.51-mod-std] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Toshiba 3.5" DT01ACA... Desktop HDD
Device Model: TOSHIBA DT01ACA100
Serial Number: 33A65XMNS
LU WWN Device Id: 5 000039 ff6c2d076
Firmware Version: MS2OA750
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Nov 14 04:56:43 2023 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 7701) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 129) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 095 095 016 Pre-fail Always - 655363
2 Throughput_Performance 0x0005 139 139 054 Pre-fail Offline - 81
3 Spin_Up_Time 0x0007 127 127 024 Pre-fail Always - 181 (Average 181)
4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 34
5 Reallocated_Sector_Ct 0x0033 053 053 005 Pre-fail Always - 989
7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 118 118 020 Pre-fail Offline - 33
9 Power_On_Hours 0x0012 087 087 000 Old_age Always - 92162
10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 34
192 Power-Off_Retract_Count 0x0032 084 084 000 Old_age Always - 20396
193 Load_Cycle_Count 0x0012 084 084 000 Old_age Always - 20396
194 Temperature_Celsius 0x0002 171 171 000 Old_age Always - 35 (Min/Max 21/50)
196 Reallocated_Event_Count 0x0032 001 001 000 Old_age Always - 2511
197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 48
198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x000a 200 200 000 Old_age Always - 0
SMART Error Log Version: 1
ATA Error Count: 22 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 22 occurred at disk power-on lifetime: 50309 hours (2096 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 30 10 42 61 04 Error: UNC at LBA = 0x04614210 = 73482768
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 40 00 00 42 61 40 00 4d+20:09:09.002 READ FPDMA QUEUED
60 40 00 40 42 61 40 00 4d+20:09:08.986 READ FPDMA QUEUED
60 40 00 80 42 61 40 00 4d+20:09:08.978 READ FPDMA QUEUED
60 40 00 c0 42 61 40 00 4d+20:09:08.970 READ FPDMA QUEUED
60 40 00 00 43 61 40 00 4d+20:09:08.962 READ FPDMA QUEUED
Error 21 occurred at disk power-on lifetime: 15166 hours (631 days + 22 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 28 d8 49 60 04 Error: UNC at LBA = 0x046049d8 = 73419224
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 80 58 00 4f 60 40 00 3d+08:25:49.129 READ FPDMA QUEUED
60 80 50 80 4e 60 40 00 3d+08:25:49.128 READ FPDMA QUEUED
60 80 48 00 4e 60 40 00 3d+08:25:49.127 READ FPDMA QUEUED
60 80 40 80 4d 60 40 00 3d+08:25:49.126 READ FPDMA QUEUED
60 80 38 00 4d 60 40 00 3d+08:25:49.125 READ FPDMA QUEUED
Error 20 occurred at disk power-on lifetime: 15163 hours (631 days + 19 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 78 88 1a 80 02 Error: UNC at LBA = 0x02801a88 = 41949832
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 80 20 00 1c 80 40 00 3d+05:44:34.231 READ FPDMA QUEUED
60 00 18 00 18 80 40 00 3d+05:44:34.231 READ FPDMA QUEUED
60 00 10 00 14 80 40 00 3d+05:44:34.231 READ FPDMA QUEUED
60 00 08 00 10 80 40 00 3d+05:44:34.231 READ FPDMA QUEUED
61 08 00 80 0f 80 40 00 3d+05:44:34.231 WRITE FPDMA QUEUED
Error 19 occurred at disk power-on lifetime: 15163 hours (631 days + 19 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 50 b0 ea 00 02 Error: UNC at LBA = 0x0200eab0 = 33614512
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 80 00 00 f9 00 40 00 3d+05:43:44.975 READ FPDMA QUEUED
60 80 60 80 f8 00 40 00 3d+05:43:44.975 READ FPDMA QUEUED
60 80 50 00 f8 00 40 00 3d+05:43:44.974 READ FPDMA QUEUED
60 80 48 80 f7 00 40 00 3d+05:43:44.974 READ FPDMA QUEUED
60 80 40 00 f7 00 40 00 3d+05:43:44.974 READ FPDMA QUEUED
Error 18 occurred at disk power-on lifetime: 14491 hours (603 days + 19 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 68 98 f8 ff 01 Error: UNC at LBA = 0x01fff898 = 33552536
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 80 00 80 07 00 40 00 04:32:51.168 READ FPDMA QUEUED
60 80 f0 00 07 00 40 00 04:32:51.168 READ FPDMA QUEUED
60 80 e8 80 06 00 40 00 04:32:51.168 READ FPDMA QUEUED
60 80 e0 00 06 00 40 00 04:32:51.168 READ FPDMA QUEUED
60 80 d8 80 05 00 40 00 04:32:51.168 READ FPDMA QUEUED
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 1019 -
# 2 Short offline Completed without error 00% 1013 -
# 3 Short offline Completed without error 00% 1013 -
# 4 Short offline Completed without error 00% 10 -
# 5 Short offline Completed without error 00% 4 -
# 6 Short offline Completed without error 00% 3 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
----------
# smartctl -a /dev/sdb
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-6.1.51-mod-std] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
Smartctl open device: /dev/sdb failed: No such device
----------
# fdisk -l /dev/sda
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: TOSHIBA DT01ACA1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x000e526f
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 41947135 41943040 20G fd Linux raid autodetect
/dev/sda2 41947136 1952468991 1910521856 911G fd Linux raid autodetect
/dev/sda3 1952468992 1953519615 1050624 513M 82 Linux swap / Solaris
----------
# fdisk -l /dev/sdb
fdisk: cannot open /dev/sdb: No such file or directory
A DegradedArray event had been detected on md device /dev/md2.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sda1[0]
20971456 blocks [2/1] [U_]
md2 : active raid1 sda2[0]
955260864 blocks [2/1] [U_]
unused devices: &lt;none&gt;
One year.
The disk /dev/sdb is well and truly dead.
The disk /dev/sda is nearly dead.
The loss of your data is the result of your own negligence. I'm sorry to have to put it so bluntly. Fortunately you still have your manual backups.
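That verdict can be read straight off the SMART table quoted earlier; the three raw values below are copied from the `smartctl -a /dev/sda` output in this thread (989 reallocated sectors, 2511 reallocation events, 48 pending sectors all point to a disk losing its surface):

```shell
# Filter the attributes that signal a dying disk from the pasted
# SMART table (lines copied verbatim from the thread above).
vals=$(awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector|Reallocated_Event_Count/ {print $2 "=" $NF}' <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   053   053   005    Pre-fail  Always       -       989
196 Reallocated_Event_Count 0x0032   001   001   000    Old_age   Always       -       2511
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       48
EOF
)
echo "$vals"
```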
Thank you for the confirmation and for sharing your expertise.