Persistent Storage Device Names
In a Linux system (and every other Unix-like system), every device known to the kernel gets its own name during device probing at boot time. If the probing order does not change across reboots and the hardware configuration remains the same, those names are stable and there is nothing to worry about.
With newer technologies like FireWire, USB and Fibre Channel, devices can be added and removed at runtime. But even in the good old days it was not uncommon to add and remove drives, or to experience hardware failures, at runtime. All this leads to a different probing order, and the stable kernel device names are in danger at the next boot. The unexpected hardware change does not necessarily affect the storage controllers and drives actually in use; often the shift of device names comes from drives that are not (yet) in use.
A few things were invented to make the system configuration files independent of the kernel device names. The most common was the use of filesystem labels and filesystem UUIDs. Using a volume manager like LVM or MD RAID also provides stable names, among other features.
But mapping a hardware drive to a specific kernel device was not reliable until sysfs was introduced with Linux kernel 2.6. It allows mapping from kernel device names back to the hardware behind them. With the help of udev it is now possible to access a drive not only by its kernel name, but also via symlinks with a stable name. The final naming of devices is up to the admin, who can add his own naming rules to the udev configuration.
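As a rough sketch of such a rule (the file name, serial number and symlink name are made up for illustration, and the exact rule syntax depends on the udev version):

    # /etc/udev/rules.d/10-local.rules (hypothetical file and values)
    # Create /dev/backup_disk* symlinks for the drive with this serial
    # number, no matter which sd* name the kernel assigns to it.
    KERNEL=="sd*", ENV{ID_SERIAL}=="WD-WCAV12345678", SYMLINK+="backup_disk%n"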
By default, every disk partition is accessible by its hardware path and the filesystem UUID. If the disk provides some sort of unique serial number, or if the individual partitions have a filesystem label, this information is used as well to construct a stable name. All the stable names are actually symlinks in /dev/disk/by-*, pointing to a real block device node: the kernel name.
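What these symlinks look like can be inspected with ls; the following output is illustrative, with made-up UUIDs and labels:

    $ ls -l /dev/disk/by-uuid/
    lrwxrwxrwx 1 root root 10 ... 2e8f4bca-35e2-4fb9-8c4d-3b41f8d0a912 -> ../../sda2
    $ ls -l /dev/disk/by-label/
    lrwxrwxrwx 1 root root 10 ... home -> ../../sda3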
The kernel device names must still be available, and not only for backwards compatibility. If one of the attached drives fails, the kernel uses its internal name to report the error. It doesn't know about the symlinks.
A detailed list of the kernel device names can be found in the Linux kernel sources in Documentation/devices.txt.
There can be no general recommendation about which naming scheme should be used for a system configuration. It all depends on what changes to the hardware setup may happen in the future. The use of LVM appears to be a reasonable choice, since it provides human-readable stable names, independent of the underlying kernel name. The default naming scheme in SuSE Linux is still the kernel name.
On a production system, a hardware failure usually means loss of service. Using persistent device names may help to bring the system back online quickly until the old system state is restored.
On a test system, a hardware configuration change will not cause endless reconfiguration of already configured partitions.
A few thoughts about more advanced setups can be found below.
device naming models
find by disk partition content
Logical Volume Manager
A volume manager combines several partitions into one group. Every logical partition gets its own unique name at configuration time. This name is independent of the kernel name; it is stored in the on-disk metadata. From the device naming point of view, LVM, EVMS and MD software RAID all fall into this category.
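A minimal sketch of how such stable names come about with LVM; the kernel device names, the volume group name "data" and the volume name "home" are made up for illustration:

    # Combine two partitions into one volume group, then carve out
    # a logical volume. The chosen names end up in the on-disk metadata.
    pvcreate /dev/sdb1 /dev/sdc1
    vgcreate data /dev/sdb1 /dev/sdc1
    lvcreate -L 20G -n home data
    # The volume is now always reachable as /dev/data/home (or
    # /dev/mapper/data-home), no matter how the kernel names the disks.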
Filesystem UUID
Every Linux filesystem has a UUID, which is generated when the filesystem is created on the disk partition. Quoting the manual page of uuidgen:
"The new UUID can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future."
Filesystem Label
All Linux filesystems support a short human-readable nickname, the filesystem label. The maximum string length differs depending on the filesystem type; typical values are 12 and 16 bytes. There is a real possibility of name clashes when new disks with existing filesystems are added to the system.
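A label can be set when the filesystem is created, or later; for ext2/ext3 this is done with e2label (device name and label are illustrative):

    # Assign the label "home" to an existing ext2/ext3 filesystem.
    e2label /dev/sda3 home
    # The partition then also shows up as /dev/disk/by-label/home.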
find by hardware path to the disk
Every disk is connected to some kind of storage controller via a bus. The storage controller itself is connected to the CPU via some other kind of bus. This path uses the bus numbers of the involved hardware.
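The resulting by-path symlink for a disk behind a PCI SCSI controller might look like this (output illustrative):

    $ ls -l /dev/disk/by-path/
    lrwxrwxrwx 1 root root  9 ... pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
    lrwxrwxrwx 1 root root 10 ... pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1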
find by serial number of the disk
Every disk should have a unique serial number assigned by its manufacturer. This is usually the case for high-end disks and for newer disks. But unfortunately, not every disk provides a unique number.
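Disks that do provide a usable serial number show up in /dev/disk/by-id; the model and serial number below are made up:

    $ ls -l /dev/disk/by-id/
    lrwxrwxrwx 1 root root  9 ... ata-ST3500418AS_9VM1ZD3Q -> ../../sda
    lrwxrwxrwx 1 root root 10 ... ata-ST3500418AS_9VM1ZD3Q-part1 -> ../../sda1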
unique names in the boot process
There are four steps during boot, from the firmware to the Linux startup scripts. Each one has different naming rules for finding disks.
- firmware loading the bootloader
- Every firmware has its own way to describe bootable devices. There are simple-minded schemes like drive A: and D: in the PC BIOS world, and there are real hardware paths used on systems with OpenFirmware. The firmware itself provides some way to boot from a drive other than the current default boot drive. This means that the configured bootloader device may not be the one the bootloader was actually loaded from.
- bootloader loading the kernel
- Once the bootloader is running, it can usually access the drive it was loaded from. But access to other drives connected to the system is limited by the firmware capabilities, and scanning for other connected drives and storage controllers is not always possible. This means that hardcoded device descriptions in the bootloader configuration file may not point to the correct drive. The bootloader allows the admin to edit the kernel command line options to pass a correct root= value to the kernel.
- kernel mounting the root filesystem
- Once the kernel has initialized all hardware, it tries to mount the root filesystem as specified in the root= command line option. This task is now done by a script in the initrd. Several root= options can be specified, but only the last root= option is used (see the example after this list). Once the root filesystem is checked and mounted, control is passed to /sbin/init.
- boot scripts mounting remaining filesystems
- Once /sbin/init is running, it runs the init scripts in /etc/init.d/. They use configuration files like /etc/fstab, /etc/mdadm.conf, /etc/lvm/lvm.conf, /etc/evms.conf, /etc/raw and others to configure the remaining filesystems.
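As an illustration of steps two and three, a GRUB menu.lst entry might pass root= like this; the partition numbers, paths and label are made up:

    title Linux
        root (hd0,1)
        # Two root= options: the initrd script uses the last one, so the
        # root filesystem is found by its label here.
        kernel /boot/vmlinuz root=/dev/sda2 root=LABEL=rootfs
        initrd /boot/initrd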
If the system hardware configuration changes unexpectedly, the four steps listed above need to deal with that fact in some way to get the system back into an operating state. Depending on the device naming scheme in use and the actual type of the hardware change, no configuration changes may be required on the Linux side.
- configure the boot device
- The value of the boot drive is usually stored in NVRAM. On systems with OpenFirmware it is possible to change this value from the Linux side; other systems may only offer the firmware user interface itself to set the boot drive.
With kernel version 2.6, sysfs allows reliable mapping from Linux kernel device names to OpenFirmware device path names. The device path is a hardware device path; on-disk content or disk serial numbers cannot be used. In case of a boot failure, the SMS menu allows the admin to boot from a different bootable device via the 'Multiboot' menu.
- configure the kernel and initrd to load
- All configured kernels are usually stored on the same boot partition. Depending on the firmware in use, they can be loaded from different drives. The firmware naming scheme has to be used here because the bootloader cannot guess what name the Linux kernel will assign to a given drive. The actual syntax depends on the bootloader in use.
- configure the root filesystem
- The kernel (and the /init script in the initrd) use the root= kernel command line option to find and mount the final root filesystem. This option is stored in the bootloader configuration file, but it can be modified in the bootloader user interface before control is passed to the kernel. The root= value can contain a kernel device node name, or UUID= or LABEL= to look for on-disk partition identifiers.
- configure the data partitions
- There are several configuration files that configure access to data partitions and monitoring tools. They usually use kernel device node names, except /etc/fstab, which also understands UUID= and LABEL= as device identifiers. Some of the tools may expect only kernel device node names in their configuration files. mdadm, lvm, raw, evms and the bootloaders can also handle the /dev/disk/by-* naming scheme. A few illustrative /etc/fstab entries are shown below.
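In /etc/fstab, the naming schemes can be mixed freely; the UUID, label and serial number below are made up:

    # by filesystem UUID
    UUID=2e8f4bca-35e2-4fb9-8c4d-3b41f8d0a912       /      ext3  defaults  1 1
    # by filesystem label
    LABEL=home                                      /home  ext3  defaults  1 2
    # by disk serial number (udev symlink)
    /dev/disk/by-id/ata-ST3500418AS_9VM1ZD3Q-part3  swap   swap  defaults  0 0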
configuration with YaST
The YaST partitioner can be used to switch between naming schemes in /etc/fstab. Select a mount point and click on 'fstab options...'. This window also allows you to specify a filesystem label, even if mount by filesystem label is not used.
On i386-compatible systems, the changes made here are also propagated into the grub configuration file. But the changes are not propagated into the file /boot/grub/device.map. You have to change the kernel device for (hd0) manually to a persistent device name.
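Such a device.map entry could then look like this (the by-id name is made up):

    # /boot/grub/device.map
    (hd0)   /dev/disk/by-id/ata-ST3500418AS_9VM1ZD3Q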
For other bootloaders like lilo, elilo and zipl, the global boot= and root= values have to be adjusted manually. Either use 'Edit configuration file' in the 'Other...' pulldown menu in the YaST bootloader module, or edit all related files after the installation.