This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY, either expressed or implied. While every effort has been made to ensure the accuracy of the information documented herein, the author(s)/editor(s)/maintainer(s)/contributor(s) assume NO RESPONSIBILITY for any errors, or for any damages, direct or consequential, resulting from the use of the information documented herein.
Important: reading comes before tinkering! After all, you need to know what you are actually doing... man pages, info files, HOWTOs, READMEs, etc. If some commands are hard to understand because of their command-line parameters, then it is time to read the man page...
Logical Volume Manager HOWTO, also try your local copy in /usr/share/doc/howto/en/html/LVM-HOWTO.html.
Prepare the partition /dev/sdb1 to become a member of the LVM system. Two steps: set the partition type to 8e ("Linux LVM") with fdisk, then initialize the partition with pvcreate.
shell$ fdisk /dev/sdb

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 8e

Command (m for help): p

Disk /dev/sdb: 255 heads, 63 sectors, 555 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1       555   4458006   8e  Linux LVM

Command (m for help): w
[...]
shell$

Check it:
shell$ fdisk -l /dev/sdb

Disk /dev/sdb: 255 heads, 63 sectors, 555 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1       555   4458006   8e  Linux LVM
shell$ pvcreate /dev/sdb1
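To verify the result, pvdisplay shows the new PV's metadata (just a suggested check, not part of the two steps above):

shell$ pvdisplay /dev/sdb1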
Now we need to tell LVM which devices/partitions are part of the LVM system.
If LVM has never been used on your system before, the database files /etc/lvmtab and /etc/lvmtab.d/* are missing. You can search for them in your filesystem, or the first invocation of one of the vg* commands will tell you so.
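A quick way to look for them (nothing more than an ls; on a fresh system the files simply will not exist yet):

shell$ ls -l /etc/lvmtab /etc/lvmtab.d/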
So we create the database:
shell$ vgscan --verbose
Now the LVM system knows where to store its meta information, so we can start putting partitions into it. Since throwing everything into one big heap would create a mess, LVM provides the concept of Volume Groups (VGs): you can have one or more VGs into which you put partitions, and you can name these VGs as you like (and rename them afterwards, too).
We create one VG named "test_vg" and stick the partition /dev/sdb1 into it:
shell$ vgcreate test_vg /dev/sdb1
You may have prepared further partitions, e.g. /dev/sdc6; stick it into VG "test_vg", too:
shell$ vgextend test_vg /dev/sdc6
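To verify which PVs now belong to which VG, a quick optional check is:

shell$ pvscan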
Now we have gathered all/most of our unused/available partitions in one great pool, a VG called "test_vg".
Now it would be fine to create one huge "virtual partition" from this pool, create a filesystem on it, mount it somewhere, be happy and go to sleep.
But stop. - Your sleep would not last long, because of the following experience from Real Life:
Your users could not, would not, or simply were not able to forecast their disk space requirements. They just wanted "all the rest" in /where/they/thought/they/need-it/. Now they want more space in /somewhere/else/, and they are far from using up "all the rest" in /where/they/thought/they/need-it/... Now you are locked out, because all your disk space is allocated in the pool under /where/they/thought/they/need-it/...
But LVM has foreseen this and provides the solution: it lets you split each VG into one or more Logical Volumes (LVs). These LVs are variable in size, and they are the new "partitions" on which you create filesystems and mount them as usual.
Let's do that now, within VG "test_vg" create two LVs named "test00_lv" (1600MB in size) and "test01_lv" (500MB in size):
shell$ lvcreate --size 1600M --name test00_lv test_vg
shell$ lvcreate --size 500M --name test01_lv test_vg

This creates the following device special files:
shell$ ls -l /dev/test_vg
total 46
crw-r-----    1 root     disk     109,   0 Mar 15 21:24 group
brw-rw----    1 root     disk      58,   0 Mar 15 21:28 test00_lv
brw-rw----    1 root     disk      58,   1 Mar 15 21:28 test01_lv
We use /dev/test_vg/test0[01]_lv as "partitions", create ext2 filesystems on them, and mount them (also make entries in /etc/fstab):
shell$ mkfs -t ext2 /dev/test_vg/test00_lv
shell$ mkfs -t ext2 /dev/test_vg/test01_lv
shell$ mount /dev/test_vg/test00_lv /where/they/thought/they/need-it/
shell$ mount /dev/test_vg/test01_lv /somewhere/else/
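As an optional sanity check that both filesystems are mounted with the expected sizes:

shell$ df -h /where/they/thought/they/need-it/ /somewhere/else/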
Now go to sleep...
Suddenly (after your winter sleep) the users ring your phone, wanting more disk space under /somewhere/else/... - This is the moment when LVM shows its strength.
You hurry to your glass teletype to see how you can shift disk space from /where/they/thought/they/need-it/ (/dev/test_vg/test00_lv) to /somewhere/else/ (/dev/test_vg/test01_lv). Type in the following commands:
shell$ vgdisplay
--- Volume group ---
VG Name               test_vg
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                2
Open LV               1
MAX LV Size           255.99 GB
Max PV                256
Cur PV                2
Act PV                2
VG Size               6.33 GB
PE Size               4 MB
Total PE              1621
Alloc PE / Size       525 / 2.05 GB
Free  PE / Size       1096 / 4.28 GB
VG UUID               BKIj2D-wOBk-JAXO-SZsu-j5MD-3hCH-2LTIvo
You see that VG "test_vg" has a total size of 6.33 GB ("VG Size").
shell$ lvdisplay /dev/test_vg/test0[01]_lv
--- Logical volume ---
LV Name                /dev/test_vg/test00_lv
VG Name                test_vg
LV Write Access        read/write
LV Status              available
LV #                   1
# open                 0
LV Size                1.56 GB
Current LE             400
Allocated LE           400
Allocation             next free
Read ahead sectors     120
Block device           58:0

--- Logical volume ---
LV Name                /dev/test_vg/test01_lv
VG Name                test_vg
LV Write Access        read/write
LV Status              available
LV #                   2
# open                 0
LV Size                500 MB
Current LE             125
Allocated LE           125
Allocation             next free
Read ahead sectors     120
Block device           58:1
You see that /dev/test_vg/test00_lv has a size of 1600 MB ("LV Size": 1.56 GB) and /dev/test_vg/test01_lv has a size of 500 MB ("LV Size": 500 MB).
Now you notice that calculating in MB is inaccurate. Further, you need to know that LVs can only be resized in multiples of a minimum unit called "Logical Extents" (LEs); these are internally mapped to "Physical Extents" (PEs) at the VG/PV level. Their sizes are equal and can be read from the output of the vgdisplay command, "PE Size" (here: 4 MB). For example, test00_lv with 1600 MB at 4 MB per extent occupies 1600 / 4 = 400 LEs, which matches "Current LE 400" above.
Let's do some calculation:
shell$ bc -q
total_pe=1621        # from vgdisplay, "Total PE"
le_test00_lv=400     # from lvdisplay, "Current LE"
le_test01_lv=125     # from lvdisplay, "Current LE"
rest = total_pe - le_test00_lv - le_test01_lv
rest
1096
^D
shell$
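The same arithmetic tells you what a planned resize costs in LEs. For example, an extension by 700 MB at a PE size of 4 MB (a quick sketch with the numbers from above; bc truncates because its default scale is 0, so we round up by hand):

shell$ bc -q
pe_size=4                        # from vgdisplay, "PE Size" (in MB)
(700 + pe_size - 1) / pe_size    # LEs needed for +700 MB, rounded up
175
^D
shell$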
You have two options here:

1. Take the additional space from the unallocated PEs of VG "test_vg" (rest = 1096 PEs).
2. Shrink /dev/test_vg/test00_lv first and hand the freed PEs over to /dev/test_vg/test01_lv.

We discuss option 1 first, because it is easier.
At this point we must notice that we have filesystems on these "virtual partitions" (LVs), and if we resize the underlying LVs, we destroy the filesystems! Two cases: either the filesystem type supports resizing, or it does not.
So what to do? - Two ways: resize the filesystem together with the LV (if the filesystem type supports it), or back up the data, resize the LV, recreate the filesystem, and restore.
Here is a list (as of Sat Mar 16 14:19:54 MET 2002) of which filesystems support resizing:
FS type  | resizability | via command
---------+--------------+------------------------------------
ext2     | unmounted    | e2fsadm
reiserfs | unmounted    | lvextend/lvreduce, resize_reiserfs
TODO @@@@@@@@@@: list to be completed, others to come.
So we want to extend /dev/test_vg/test01_lv (/somewhere/else/); let's assume we have a non-resizable filesystem on it. Back up the data first, because the mkfs step below wipes the filesystem:
shell$ umount /dev/test_vg/test01_lv
shell$ lvextend --size +700M /dev/test_vg/test01_lv --verbose

These 700 MB are taken from the unallocated PEs (rest = 1096 PEs) in VG "test_vg".
shell$ mkfs -t ext2 /dev/test_vg/test01_lv
shell$ mount /dev/test_vg/test01_lv /somewhere/else/
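Since mkfs wipes the filesystem, in practice you would wrap these steps in a backup and restore. A minimal sketch using tar (the archive path /tmp/test01.tar is made up for this example, and the place you put it must have enough room):

shell$ tar -C /somewhere/else/ -cf /tmp/test01.tar .
shell$ umount /dev/test_vg/test01_lv
shell$ lvextend --size +700M /dev/test_vg/test01_lv
shell$ mkfs -t ext2 /dev/test_vg/test01_lv
shell$ mount /dev/test_vg/test01_lv /somewhere/else/
shell$ tar -C /somewhere/else/ -xf /tmp/test01.tar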
A lot of work... Now assume we have an unmounted-resizable filesystem on /dev/test_vg/test01_lv, like ext2:
shell$ umount /dev/test_vg/test01_lv
shell$ e2fsadm --size +700M /dev/test_vg/test01_lv --verbose
shell$ fsck -t ext2 -f /dev/test_vg/test01_lv
shell$ mount /dev/test_vg/test01_lv /somewhere/else/
Now extend /dev/test_vg/test01_lv with reiserfs (reiserfsprogs-3.x.0j, RTFM 'man resize_reiserfs'!):
shell$ umount /dev/test_vg/test01_lv
shell$ lvextend --size +700M /dev/test_vg/test01_lv --verbose
shell$ resize_reiserfs -s +700M /dev/test_vg/test01_lv
shell$ mount /dev/test_vg/test01_lv /somewhere/else/
Now shrink /dev/test_vg/test01_lv with reiserfs (reiserfsprogs-3.x.0j, RTFM 'man resize_reiserfs'!):
shell$ umount /dev/test_vg/test01_lv
shell$ resize_reiserfs -s -700M /dev/test_vg/test01_lv
shell$ lvreduce --size -700M /dev/test_vg/test01_lv --force
shell$ mount /dev/test_vg/test01_lv /somewhere/else/
Still a lot of work... Now assume we had a mounted-resizable filesystem on /dev/test_vg/test01_lv, like @@@non-existant-fs???@@@@:
shell$ resize_@@@non-existant-fs???@@@@ -s +700M /dev/test_vg/test01_lv
Use LVM...
Use a filesystem that conforms to the following criteria:
@@@@@@@@@ TODO
You can only shift whole PVs between VGs. Shifting LEs/PEs between VGs would not work, because PEs are mapped to LEs and each PE belongs to exactly one PV. The tools for finding and moving PVs:
shell$ lvmdiskscan
shell$ lvmdiskscan --lvmpartition
shell$ lvmdiskscan | \
        grep -i "0x8e" | \
        while read foo1 foo2 device rest ; do echo $device ; done | \
        xargs pvdisplay --colon
shell$ vgdisplay --verbose | grep "PV Name"
shell$ pvmove ...
shell$ vgreduce ...
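Put together, moving the PV /dev/sdc6 out of "test_vg" into another VG might look like this (a sketch; the target VG "other_vg" is hypothetical, and pvmove needs enough free PEs on the remaining PVs of "test_vg"):

shell$ pvmove /dev/sdc6              # migrate all allocated PEs off /dev/sdc6
shell$ vgreduce test_vg /dev/sdc6    # remove the now-empty PV from test_vg
shell$ vgextend other_vg /dev/sdc6   # add it to the target VG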
Only VGs with _no_ open LVs can be deactivated! (See the next question.) Then:
shell$ vgchange --available n test_vg
shell$ vgrename test_vg vg00
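Afterwards, reactivate the VG under its new name; note that the device special files move from /dev/test_vg/* to /dev/vg00/*, so /etc/fstab needs updating and the LVs must be remounted, e.g.:

shell$ vgchange --available y vg00
shell$ mount /dev/vg00/test00_lv /where/they/thought/they/need-it/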
shell$ vgdisplay --verbose | grep -e "Open LV" -e "LV Name" -e "# open"
Open LV               1
LV Name                /dev/test_vg/test00_lv
# open                 0
LV Name                /dev/test_vg/test01_lv
# open                 1
Now close the open LV /dev/test_vg/test01_lv:
shell$ umount /dev/test_vg/test01_lv
shell$ vgdisplay --verbose | grep -e "Open LV" -e "LV Name" -e "# open"
Open LV               1
LV Name                /dev/test_vg/test00_lv
# open                 0
LV Name                /dev/test_vg/test01_lv
# open                 0
Now you can do things like vgchange ... test_vg.
Use lsof.
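For example, to list the processes holding files open on the filesystem of a busy LV (assuming it is still mounted):

shell$ lsof /dev/test_vg/test01_lv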
Yes, you can. You do not have to create partitions on your disks; you can run pvcreate /dev/sdb on the whole disk.
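Note, however, that pvcreate may refuse a whole disk that still carries a partition table; the LVM HOWTO suggests zeroing the first sector in that case (destructive - triple-check the device name!):

shell$ dd if=/dev/zero of=/dev/sdb bs=512 count=1
shell$ pvcreate /dev/sdb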
@@@@@@@@@ TODO
I expect the answer to be 'No', because the boot loader LILO needs to know where the kernel image is located. I believe that the kernel image must reside within one partition, and with LVM it could happen that the kernel image gets spread over different partitions or even disks. - Is there someone out there who knows better? Please let me know.
@@@@@@@@@ TODO
© 2002 by the author. Created: Sat Mar 16 18:26:22 MET 2002. Last updated: 2007-09-13T13:34:30+0200. EOF