Linux LVM provides a different way of looking at storage. On the server side it offers far more flexibility than standard disks with fixed partitions: it allows us to migrate from old storage to new, and to easily grow the space available to a particular file system (assuming there is free space available).
Basically, LVM abstracts physical storage by creating Physical Volumes (PVs), each of which is divided into Physical Extents (PEs) of a uniform size. PVs are then added to a Volume Group (VG), which pools the PEs of all member PVs. Once you have a VG you can create a Logical Volume (LV), which can be formatted and used as if it were a physical disk; the difference is that the LV can be expanded as many times as needed, until all PEs in the VG have been allocated. At that point you can simply add another PV to the VG, which makes additional PEs available to be added to an LV.
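The layering just described can be sketched as a short command sequence. The device and names here (/dev/sdb1, demo_vg, demo_lv) are illustrative assumptions, not taken from the test environment shown later:

```shell
# A minimal sketch of the PV -> VG -> LV hierarchy; names are assumptions.
pvcreate /dev/sdb1                  # initialise a disk partition as a PV
vgcreate demo_vg /dev/sdb1          # pool its PEs into a new VG
lvcreate -L 10G -n demo_lv demo_vg  # carve a 10 GiB LV out of the VG's PEs
mkfs.ext4 /dev/demo_vg/demo_lv      # format the LV like any block device
```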
Expanding your storage to meet the requirements of your system is a great benefit, and if that were the only thing you got from LVM2 it would still be worth it. But there is more: snapshots, and the ability to move a logical volume onto a specific Physical Volume, as you would want to do in the event of a disk failure or a migration to new hardware.
Below I have outlined some of the commands you will need to effectively manage your LVM environment. In my test environment I have hardware RAID, which presents a single PV to LVM; without hardware RAID you would see multiple PVs. It is also important to note that LVM does not provide RAID, so if your data needs the protection of RAID you must ensure you have RAID in addition to LVM.
Display Physical Volume Information
# pvs
  PV         VG            Fmt  Attr PSize PFree
  /dev/sda5  testserver_vg lvm2 a-   1.23t 597.86g
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               testserver_vg
  PV Size               1.23 TiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              321442
  Free PE               153052
  Allocated PE          168390
  PV UUID               pR7PKf-5Sjy-3Zcf-ksZc-o5f6-eoIC-G1dZex
Display Volume Group Information
# vgs
  VG            #PV #LV #SN Attr   VSize VFree
  testserver_vg   1   4   0 wz--n- 1.23t 597.86g
# vgdisplay
  --- Volume group ---
  VG Name               testserver_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.23 TiB
  PE Size               4.00 MiB
  Total PE              321442
  Alloc PE / Size       168390 / 657.77 GiB
  Free PE / Size        153052 / 597.86 GiB
  VG UUID               oel9Qw-17dO-dDce-63Lq-jRde-ooTx-qhmdhz
Display Logical Volume Information
# lvs
  LV     VG            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  kvm    testserver_vg -wi-ao 372.53g
  root   testserver_vg -wi-ao 186.26g
  swap_1 testserver_vg -wi-ao  48.98g
  testvm testserver_vg -wi-a-  50.00g
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/testserver_vg/swap_1
  VG Name                testserver_vg
  LV UUID                k0aHZW-CCpo-GS53-GodT-7tTH-9gw7-QXoRxT
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                48.98 GiB
  Current LE             12540
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0

  --- Logical volume ---
  LV Name                /dev/testserver_vg/kvm
  VG Name                testserver_vg
  LV UUID                Kmm4WS-joP4-5Em4-Xvmj-CxEJ-pBD3-BPmCFl
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                372.53 GiB
  Current LE             95367
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:1

  --- Logical volume ---
  LV Name                /dev/testserver_vg/root
  VG Name                testserver_vg
  LV UUID                UQs2ai-8qyN-32Yv-6VEL-tCde-OdFD-XA5BAh
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                186.26 GiB
  Current LE             47683
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:2

  --- Logical volume ---
  LV Name                /dev/testserver_vg/testvm
  VG Name                testserver_vg
  LV UUID                C2Oxdf-QBbr-PRKI-Bk9B-y4EE-Qf4B-7DXZd4
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:3
Create a Logical Volume
# lvcreate -L10G -n testvm testserver_vg
  Logical volume "testvm" created
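A new LV is just a block device, so to actually use it you would typically format and mount it. The file system type and mount point below are assumed choices, not part of the original setup:

```shell
# Format and mount the new LV; ext4 and /srv/testvm are assumptions.
mkfs.ext4 /dev/testserver_vg/testvm
mkdir -p /srv/testvm
mount /dev/testserver_vg/testvm /srv/testvm
# Optionally persist the mount across reboots:
echo '/dev/testserver_vg/testvm /srv/testvm ext4 defaults 0 2' >> /etc/fstab
```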
Extend a Logical Volume
Extending an LV is a three-step process: (1) confirm there is free space in the VG, (2) extend the LV, and (3) resize the file system.
# vgs
  VG            #PV #LV #SN Attr   VSize VFree
  testserver_vg   1   4   0 wz--n- 1.23t 597.86g
# lvextend -L60G /dev/testserver_vg/testvm
or
# lvextend -L+10G /dev/testserver_vg/testvm
  Extending logical volume testvm to 60.00 GiB
  Logical volume testvm successfully resized
# resize2fs /dev/testserver_vg/testvm
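The three steps can be run back to back as below. Note that lvextend's -r (--resizefs) flag will invoke the appropriate file-system resize tool for you, collapsing steps 2 and 3; the sizes here are illustrative:

```shell
vgs testserver_vg                          # (1) confirm the VG has free PE
lvextend -L+10G /dev/testserver_vg/testvm  # (2) grow the LV by 10 GiB
resize2fs /dev/testserver_vg/testvm        # (3) grow the ext file system

# Equivalent shortcut: let lvextend resize the file system as well.
lvextend -r -L+10G /dev/testserver_vg/testvm
```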
Create a Logical Volume Snapshot
When you create a snapshot you are essentially creating a second volume. As blocks on the original volume change, the original data is first copied into the snapshot volume (copy-on-write), so the snapshot preserves a point-in-time image of the original volume while the original continues to receive writes.
# lvcreate -L1G -s -n testsnap /dev/testserver_vg/testvm
# lvremove /dev/testserver_vg/testsnap
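A snapshot can be mounted read-only to pull back individual files before it is removed. The mount point and recovery paths here are hypothetical:

```shell
mkdir -p /mnt/testsnap
mount -o ro /dev/testserver_vg/testsnap /mnt/testsnap  # browse the point-in-time image
cp /mnt/testsnap/path/to/file /tmp/restored-file       # hypothetical recovery path
umount /mnt/testsnap
lvremove /dev/testserver_vg/testsnap                   # discard the snapshot when done
```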
Scan for Changes in Disk Layout
This can be helpful when you are moving disks from one machine to another, for example during a data migration.
# pvscan
# vgscan
# lvscan
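After attaching disks from another machine, the scanned VG usually also needs to be activated before its LV device nodes appear under /dev. The VG name below is an assumption:

```shell
pvscan && vgscan && lvscan   # detect the newly attached PVs, VGs, and LVs
vgchange -ay migrated_vg     # activate every LV in the discovered VG
ls /dev/migrated_vg/         # device nodes for the LVs should now exist
```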
Prepare and Add Physical Disks to a Volume Group
Create a partition on the new disk and set its partition type to 8e (Linux LVM) so that it can be used by LVM.
# fdisk /dev/sdb
# pvcreate /dev/sdb1
# vgextend testserver_vg /dev/sdb1
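Since fdisk is interactive, its keystrokes are shown as comments in this sketch; afterwards pvs and vgs should show the new PV and the extra free PEs. The device name is an assumption:

```shell
fdisk /dev/sdb      # interactively: n (new partition), t -> 8e (type), w (write)
partprobe /dev/sdb  # ask the kernel to re-read the partition table
pvcreate /dev/sdb1            # initialise the partition as a PV
vgextend testserver_vg /dev/sdb1
pvs                           # the new PV should now list under testserver_vg
vgs testserver_vg             # VFree should have grown by the PV's size
```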
By adding the PV you of course gain all of the PEs it contains, and you could then extend an LV if you were so inclined. The same process is also the first step in migrating data to a replacement disk.
Move the Physical Extents to a New Physical Volume
LVM allows us to shift the location of PEs. We can do this in two ways: either distribute all of the extents across the remaining PVs, or specify the exact PV we want them moved to.
# pvmove /dev/sda
or
# pvmove /dev/sda /dev/sdb
Remove the Physical Volume from the Volume Group
# vgreduce testserver_vg /dev/sda
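Put together, a whole-disk migration to replacement hardware might look like the following sketch; the device names are assumptions:

```shell
pvcreate /dev/sdb1                  # prepare the replacement disk as a PV
vgextend testserver_vg /dev/sdb1    # add it to the VG
pvmove /dev/sda5 /dev/sdb1          # relocate every allocated PE off the old PV
vgreduce testserver_vg /dev/sda5    # drop the now-empty PV from the VG
pvremove /dev/sda5                  # wipe the LVM label so the old disk can be retired
```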