November 18

Linux: The general process of expanding live LVM partitions in Linux

Overview

    The following information outlines the drive expansion process. No reboot is required, and we do not need to span the LVM across an additional partition.

    Please note: It is important to complete all of the steps below in their entirety. This procedure involves deleting an active partition and recreating it while the server is running.

Short version:
Live Expansion of LVM Partitions in VMware:

In vCenter, go to the server and choose Edit Settings.
Expand Hard disk 1 to 60 GB. Note: VMs cannot have snapshots at this point.

*After the expansion has completed, take a snapshot of the server. Make certain to include "Snapshot Memory".

- Note: Commands follow 

1. Log into the server via SSH or the Console. Before starting, execute the following command:
 echo 1 > /sys/class/block/sda/device/rescan

- This forces the kernel to rescan /dev/sda and pick up the new size

2. Verify which partition you are working with
 fdisk -l /dev/sda

3. Display the partitions
 cat /proc/partitions | grep sd

4. Display the physical drive information
 pvs

5. Modify the partition table
 fdisk /dev/sda

---------------------------------------
List the partitions
p
Delete the partition
d
2
Create the partition
n
p
2
Enter
Enter
Set the partition type
t
2
8e
Write/commit your changes
w
---------------------------------------

- Ignore the partition table re-read error; the next step addresses it

6. Let the OS know there have been partition table changes
 partprobe 

You may get an error stating that the server needs to be rebooted. Do not reboot; just execute the next command:
 partx -u /dev/sda

7. Verify that the in-memory kernel partition table has been updated with the new size
 cat /proc/partitions | grep sd

- This should look larger than step 3

8. Resize the LVM's physical volume
 pvresize /dev/sda2
 1 physical volume(s) resized / 0 physical volume(s) not resized

9. Display the physical drive information
 pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <60.00g 32.00g

- This should be larger than step 4.

10. Display the names of the Logical Volumes
 lvdisplay
The format should look like:
  --- Logical volume ---
  LV Path                /dev/rhel/usr

11. Extend each volume to the appropriate size (this may vary per server, as they do not all seem to be set up the same.)
 lvextend -L+16G /dev/rhel/usr
 lvextend -L+10G /dev/rhel/var
 lvextend -L+4G /dev/rhel/home

12. Grow the filesystem within the logical volume to fill out the new space
 xfs_growfs /dev/rhel/usr
 xfs_growfs /dev/rhel/var
 xfs_growfs /dev/rhel/home

13. Verify the drive space has increased
 df -h

14. You are finished

-------------------------

Long Version:
Live Expansion of LVM Partitions in VMware:

In vCenter, go to the server and choose Edit Settings.
Expand Hard disk 1 to 60 GB. Note: VMs cannot have snapshots at this point.

*After the expansion has completed, take a snapshot of the server. Make certain to include "Snapshot Memory".

- Note: Commands follow 

1. Log into the server via SSH or the Console. Before starting, execute the following command:
 echo 1 > /sys/class/block/sda/device/rescan

- This forces the kernel to rescan /dev/sda and pick up the new size
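If the VM has more than one virtual disk, each sd* device needs the same poke. A minimal sketch (the sysfs root is parameterized only to make it testable; in practice call it with no argument, as root):

```shell
# Sketch: rescan every sd* disk so the kernel re-reads its size.
rescan_all_disks() {
  root=${1:-/sys}
  for f in "$root"/class/block/sd*/device/rescan; do
    [ -e "$f" ] || continue
    echo 1 > "$f" && echo "rescanned: $f"
  done
}
```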

2. Verify which partition you are working with
fdisk -l /dev/sda

Disk /dev/sda: 60.9 GB, 60899345920 bytes, 118944035 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b1786

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    62914559    29359104   8e  Linux LVM
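The Blocks column is in 1 KiB units, so a quick arithmetic check ties it to the sector range shown above (start and end sectors copied from that output; 512-byte sectors mean two sectors per block):

```shell
# Sanity check: sda2's Blocks value equals its sector span / 2.
start=4196352
end=62914559
kib=$(( (end - start + 1) / 2 ))
gib=$(( kib / 1024 / 1024 ))
echo "$kib KiB (~$gib GiB)"
```

This prints 29359104 KiB, matching the Blocks column, and roughly 28 GiB, matching what pvs reports in the next step.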


3. Display the partitions
 cat /proc/partitions | grep sd
   8        0   83886080 sda
   8        1    2097152 sda1
   8        2   29359104 sda2
   8       16   41943040 sdb

4. Display the physical drive information
 pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <28.00g    0 

5. Modify the partition table

---------------------------------------
 fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p

Disk /dev/sda: 60.9 GB, 60899345920 bytes, 118944035 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b1786

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    62914559    29359104   8e  Linux LVM

Command (m for help): d
Partition number (1,2, default 2): 2
Partition 2 is deleted
Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (2-4, default 2): 2
First sector (4196352-167772159, default 4196352): 
Using default value 4196352
Last sector, +sectors or +size{K,M,G} (4196352-118944035, default 118944035): 
Using default value 118944035
Partition 2 of type Linux and of size 60 GiB is set

Command (m for help): t
Partition number (1,2, default 2): 2
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
---------------------------------------

- Ignore the "Device or resource busy" re-read warning above; the next step addresses it
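The keystroke sequence above can also be fed to fdisk non-interactively. The sketch below exercises it against a throwaway image file so nothing real is touched; the sizes and initial layout are invented for the demo. (Newer fdisk releases may additionally prompt to wipe an existing LVM signature, which would need an extra answer in the sequence.)

```shell
# Demo on a scratch image: build a two-partition DOS label, "expand" the disk,
# then replay the d/n/p/t-8e/w sequence from the transcript above.
command -v fdisk >/dev/null || exit 0   # skip the demo where fdisk is absent
img=$(mktemp)
truncate -s 64M "$img"
# Initial layout: p1 16M, p2 16M typed as 8e (Linux LVM).
printf 'o\nn\np\n1\n\n+16M\nn\np\n2\n\n+16M\nt\n2\n8e\nw\n' | fdisk "$img" >/dev/null 2>&1
truncate -s 128M "$img"                 # stands in for the vCenter expansion
# Delete p2, recreate it at the same start ending at the new maximum, retype 8e.
printf 'd\n2\nn\np\n2\n\n\nt\n2\n8e\nw\n' | fdisk "$img" >/dev/null 2>&1
p2=$(fdisk -l "$img" 2>/dev/null | grep "${img}2")
echo "$p2"
rm -f "$img"
```

Point this at /dev/sda only once the snapshot from the start of the procedure exists.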

6. Let the OS know there have been partition table changes
 partprobe 

You may get an error stating that the server needs to be rebooted. Do not reboot; just execute the next command:
 partx -u /dev/sda

7. Verify that the in-memory kernel partition table has been updated with the new size
 cat /proc/partitions | grep sd
   8        0   83886080 sda
   8        1    2097152 sda1
   8        2   62914560 sda2
   8       16   41943040 sdb

- This should look larger than step 3
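Since /proc/partitions reports 1 KiB blocks, a small awk filter (purely a convenience sketch) makes the before/after comparison easier to eyeball:

```shell
# Convert the 1 KiB block counts from /proc/partitions into GiB for sd* rows.
blocks_to_gib() {
  awk '$4 ~ /^sd/ {printf "%-6s %.1f GiB\n", $4, $3 / (1024*1024)}'
}
cat /proc/partitions | blocks_to_gib
```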

8. Resize the LVM's physical volume
 pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

9. Display the physical drive information
 pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <60.00g 32.00g

- This should be larger than step 4.

10. Display the names of the Logical Volumes
 lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                KGvb1z-IsoD-SIiz-D72J-YDal-aNtM-WFNETB
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:58:59 -0400
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/rhel/var
  LV Name                var
  VG Name                rhel
  LV UUID                uq7DEy-595R-dZin-HXiy-f0qS-JDAW-NUpN7m
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:00 -0400
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                6zieq1-xgWt-vZVc-4bcJ-oYv6-dAa4-T313Qk
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:00 -0400
  LV Status              available
  # open                 2
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/home
  LV Name                home
  VG Name                rhel
  LV UUID                Q2tfVr-f3A7-s5ca-1jGs-TkH0-TT5h-GjJnfX
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:00 -0400
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/rhel/usr
  LV Name                usr
  VG Name                rhel
  LV UUID                2Ll4Ef-M0zU-rgB8-Fy2u-A9gx-VTFG-b3vomw
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:01 -0400
  LV Status              available
  # open                 1
  LV Size                <4.00 GiB
  Current LE             1023
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2


11. Extend each volume to the appropriate size (this may vary per server, as they do not all seem to be set up the same.)

 lvextend -L+16G /dev/rhel/usr
  Size of logical volume rhel/usr changed from <4.00 GiB (1023 extents) to <20.00 GiB (5119 extents).
  Logical volume rhel/usr successfully resized.

 lvextend -L+10G /dev/rhel/var
  Size of logical volume rhel/var changed from 10.00 GiB (2560 extents) to 20.00 GiB (5120 extents).
  Logical volume rhel/var successfully resized.

 lvextend -L+4G /dev/rhel/home
  Size of logical volume rhel/home changed from 1.00 GiB (256 extents) to 5.00 GiB (1280 extents).
  Logical volume rhel/home successfully resized.
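As an aside, lvextend's -r (--resizefs) flag grows the filesystem in the same step, which would fold step 12 into this one. A guarded sketch (the LV path and size are examples from above; the real command needs root and an existing LV):

```shell
# Extend an LV and grow its filesystem in one step via lvextend -r.
# Skips quietly when the LV doesn't exist, so it is safe to dry-run.
extend_lv() {  # usage: extend_lv /dev/rhel/usr +16G
  [ -e "$1" ] || { echo "skip: $1 not present"; return 0; }
  lvextend -r -L "$2" "$1"
}
extend_lv /dev/rhel/usr +16G
```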

12. Grow the filesystem within the logical volume to fill out the new space
 xfs_growfs /dev/rhel/usr
meta-data=/dev/mapper/rhel-usr   isize=512    agcount=4, agsize=261888 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=1047552, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1047552 to 5241856

 xfs_growfs /dev/rhel/var
meta-data=/dev/mapper/rhel-var   isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

 xfs_growfs /dev/rhel/home
meta-data=/dev/mapper/rhel-home  isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 1310720
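Note that xfs_growfs is XFS-specific; an ext4 volume would need resize2fs instead. A small dispatch sketch using findmnt (the ext4 branch is an assumption for servers not built like these, which all use XFS):

```shell
# Grow whatever filesystem is mounted at $1, picking the right tool by fstype.
grow_fs() {
  fstype=$(findmnt -no FSTYPE "$1" 2>/dev/null) || { echo "not mounted: $1"; return 0; }
  case "$fstype" in
    xfs)  xfs_growfs "$1" ;;                          # XFS grows via mountpoint
    ext*) resize2fs "$(findmnt -no SOURCE "$1")" ;;   # ext2/3/4 grow via device
    *)    echo "unhandled fstype: $fstype" ;;
  esac
}
```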

13. Verify the drive space has increased
 df -h

Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root    10G  122M  9.9G   2% /
devtmpfs                3.9G     0  3.9G   0% /dev
tmpfs                   3.9G     0  3.9G   0% /dev/shm
tmpfs                   3.9G  373M  3.5G  10% /run
tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-usr     20G  3.8G   17G  19% /usr
/dev/sdb                 40G  3.3G   37G   9% /data
/dev/sda1               2.0G  269M  1.8G  14% /boot
/dev/mapper/rhel-var     20G  988M   20G   5% /var
/dev/mapper/rhel-home   5.0G   41M  5.0G   1% /home
sagshared:/linuxshared  306G   30G  276G  10% /mnt/Linuxmnt
tmpfs                   783M     0  783M   0% /run/user/1000
tmpfs                   783M  4.0K  783M   1% /run/user/0
//Lshare/eip            306G   30G  276G  10% /mnt/Linuxmnt
tmpfs                   783M   12K  783M   1% /run/user/42


14. You are finished


Copyright 2021. All rights reserved.

Posted November 18, 2021 by Timothy Conrad in category "Linux"
