Recovering an LVM Physical Volume

Yesterday disaster struck – during a CentOS/RedHat installation, the installer warned (not verbatim): “Cannot read partition information for /dev/sda. The drive must be initialized before continuing.”

Now, on this particular server, sda and sdb were/are a RAID1 array (containing the OS) and a RAID5 partition respectively, and sdc was/is a 4TB RAID5 partition from an externally attached disk chassis. This was a server re-installation and all data from sda and sdb had multiple snapshots off site. sdc had no backup of its 4TB of data.

The installer discovered the drives in a different order and sda became the externally attached drive. I, believing it to be the internal RAID1 array, allowed the installer to initialise it. Oh shit…

Now this wouldn’t be the end of the world. It wasn’t backed up because a copy of the data exists on removable drives in the UK. It would mean someone flying in with the drives, handing them off to me at the airport, bringing them to the data center and copying all the data back, then returning the drives to the UK again. A major inconvenience. And it’s also an embarrassment, as I should have ensured that sda was what I thought it was via the installer’s other screens.

Anyway – from what I could make out, the installer initialised the drive with a single partition spanning the entire drive.

Once I got the operating system reinstalled, I needed to try and recover the LVM partitions. There’s not a whole lot of obvious information on the Internet about this, which is why I’m writing this post.
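Before touching anything else, a read-only look at what the installer had left behind is worthwhile. The external drive was back at /dev/sdc on the reinstalled system; I’m only showing the command here as the exact output varies by fdisk version:

# fdisk -l /dev/sdc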

The first thing I needed to do was recreate the physical volume. Now, as I said above, I had backups of the original operating system. LVM creates a file containing the metadata of each volume group in /etc/lvm/backup, named the same as the volume group. In this file, there is a section listing the physical volumes that make up the volume group along with their IDs. For example (the id is fabricated):

physical_volumes {
        pv0 {
                id = "fvrw-GHKde-hgbf43-JKBdew-rvKLJc-cewbn"
                device = "/dev/sdc"     # Hint only

                status = ["ALLOCATABLE"]
                pe_start = 384
                pe_count = 1072319      # 4.09057 Terabytes
        }
}
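If you don’t fancy eyeballing the whole file, a quick grep pulls the pv0 block out. The path here is just where I later put my retrieved copy of the backup file (more on that below):

# grep -A 2 'pv0 {' /tmp/lvm-md1000
        pv0 {
                id = "fvrw-GHKde-hgbf43-JKBdew-rvKLJc-cewbn"
                device = "/dev/sdc"     # Hint only

Note that the file also contains ids for the volume group and the logical volumes, so make sure you take the one from the pv0 block.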

Note that after I realised my mistake, I installed the OS on the correct partition and after booting, the external drive became /dev/sdc* again. Now, to recreate the physical volume with the same id, I tried:

# pvcreate -u fvrw-GHKde-hgbf43-JKBdew-rvKLJc-cewbn /dev/sdc
  Device /dev/sdc not found (or ignored by filtering).

Eh? By turning on verbosity, you find the reason among a few hundred lines of debugging:

# pvcreate -vvvv -u fvrw-GHKde-hgbf43-JKBdew-rvKLJc-cewbn /dev/sdc
...
#filters/filter.c:121         /dev/sdc: Skipping: Partition table signature found
#device/dev-io.c:486         Closed /dev/sdc
#pvcreate.c:84   Device /dev/sdc not found (or ignored by filtering).

So pvcreate will not create a physical volume using the entire disk unless I remove the partition(s) first. I do this with fdisk.
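The session went roughly as follows (reconstructed from memory, so treat it as a sketch – the prompts vary a little between fdisk versions):

# fdisk /dev/sdc

Command (m for help): d
Selected partition 1

Command (m for help): w
The partition table has been altered!

With the partition deleted and the table written, I try again: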

# pvcreate -u fvrw-GHKde-hgbf43-JKBdew-rvKLJc-cewbn /dev/sdc
  Physical volume "/dev/sdc" successfully created
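Before going further, it’s worth a quick read-only check that the new physical volume really did pick up the old UUID:

# pvdisplay /dev/sdc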

Great. Now to recreate the volume group on this physical volume:

# vgcreate -v md1000 /dev/sdc
    Wiping cache of LVM-capable devices
    Adding physical volume '/dev/sdc' to volume group 'md1000'
    Archiving volume group "md1000" metadata (seqno 0).
    Creating volume group backup "/etc/lvm/backup/md1000" (seqno 1).
  Volume group "md1000" successfully created

Now I have an “empty” volume group with no logical volumes. I know all the data is there as the initialisation didn’t format or wipe the drive. I’ve retrieved the LVM backup file called md1000 and placed it in /tmp/lvm-md1000. When I try to restore it to the new volume group, I get:

# vgcfgrestore -f /tmp/lvm-md1000 md1000
  /tmp/lvm-md1000: stat failed: Permission denied
  Couldn't read volume group metadata.
  Restore failed.

After a lot of messing, I copied it to /etc/lvm/backup/md1000 and tried again:

# vgcfgrestore -f /etc/lvm/backup/md1000 md1000
  Restored volume group md1000

I don’t know if it was the location, the renaming or both but it worked.
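As an aside, vgcfgrestore can list the backup and archive files it can actually see for a volume group (LVM also keeps rolling archives under /etc/lvm/archive), which is a useful sanity check before attempting a restore:

# vgcfgrestore -l md1000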

Now the last hurdle is that on an lvdisplay, the logical volumes show up but are marked as:

  LV Status              NOT available

This is easily fixed by marking the logical volumes as available:

#  vgchange -ay
  2 logical volume(s) in volume group "md1000" now active
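Before declaring victory, a read-only check of each filesystem is cheap insurance. The logical volume name below is only an illustration – use whatever lvdisplay reports for yours:

# fsck -n /dev/md1000/somelv
# mount -o ro /dev/md1000/somelv /mnt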

Agus sin é (and that’s it): my logical volumes are recovered with all data intact.

* how these are assigned is not particularly relevant to this story.

7 thoughts on “Recovering an LVM Physical Volume”

  1. Thank you for publishing this! This saved my ass and helped me recover most of a 1TB Logical Volume when I had accidentally dd’d over one of the drives involved!

  2. Hi Barry,

    I found your website while looking for a solution to my problem; I hope you can help me!

    I had an LVM setup with two PVs, md1 and md2 (both RAID1).
    The VG was called “archive”, and the LV was “MaxtorLVM”.
    File system was ext3.
    Because of my inexperience with LVM, I deleted everything (with the lvremove, pvremove and vgremove commands), but without overwriting the content of the md devices or reusing them.
    I thought it would be possible to rebuild all the content of the LVM by recreating the PV, VG and LV with the same size, parameters, uuid and so on.

    Unfortunately, I get an error when I try to mount the LVM device:

    # mount /dev/mapper/archive-MaxtorLVM /lvm/
    mount: you must specify the filesystem type

    # mount -t ext3 /dev/mapper/archive-MaxtorLVM /lvm/
    mount: wrong fs type, bad option, bad superblock on /dev/mapper/archive-MaxtorLVM,
    missing codepage or other error
    In some cases useful info is found in syslog – try
    dmesg | tail or so

    dmesg says:

    Apr 26 12:13:22 backup kernel: VFS: Can’t find a valid ext3 filesystem on dev dm-0.

    cat /proc/mdstat is ok.
    Other info:
    Debian stable, kernel 2.6.24.4

    Having the backups in /etc/lvm/archive and /etc/lvm/backup, is it possible to recover the content of the old LVM?

    Thanks, and sorry if my English is not good.
    Marco

  3. For Marco,

    just try:
    mkdir /media/restore
    mount -t ext3 /dev/(your volume group)/(your logical volume) /media/restore

    I was a bit confused by your comment as to which is the Logical Volume and which is your Volume Group. But what I did above worked. Remember, a Logical Volume is still ext3-formatted and mounts like any other ext3 filesystem.

    L

  4. Barry,

    Thanks for posting this. I had to replace an old drive, but wanted to get the data from it, and this was exactly what I needed. I had started a thread on the Debian User’s Forum asking how to do this, and got a couple of responses, but no one really knew, so I have posted a link to this page from there. Maybe you’ll see an uptick in hits.

  5. Hi,

    I have a RAID5 storage unit that was configured with ext3 from a Slackware system. The controller died, so we decided to attach the RAID unit to a Suse10ES box. Instead of just mounting the physical storage, the installer used YaST’s LVM configuration and added the physical volume to a volume group. Now I cannot mount the system. Is my data gone? I have tried all the mounting methods. Any help will be appreciated.

  6. Hi,
    Can data on a PV be recovered after removing the partition and doing the steps described here?
    thomas
