1&1 have been regularly criticized for the unusual partitioning on default installations of their dedicated servers – but this is the first time I have been affected. One of our dedicated servers started producing Postfix SMTP errors, caused by low disk space.
Upon investigation, the default partitions on my 1&1 Plesk 10.4 server came set at 4GB, and my ‘var’ partition was full. The used space is all genuine files, so my only option was to increase the partition. On Windows this is quite a complex procedure, requiring additional applications – however on Linux, following a simple guide made this a 5 minute procedure. Here are the steps that I took:
SSH onto your server. Once logged in, type df -h to display the partition and logical volume sizes, including the used disk space. In my case, the var logical volume is 4GB.
df > the disk free space command; displays disk usage information.
-h > displays sizes in human-readable units (KB, MB or GB).
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              3.7G  673M  3.1G  18% /
/dev/mapper/vg00-usr  4.0G  1.3G  2.7G  33% /usr
/dev/mapper/vg00-var  4.0G  3.6G  456M  89% /var
/dev/mapper/vg00-home 4.0G  4.2M  4.0G   1% /home
none                  2.0G   10M  2.0G   1% /tmp
Next, type fdisk -l to view the total hard disk size(s) and the partitions on each disk; in my case the RAID mirror shows the two drives individually.
# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         487     3911796   fd  Linux raid autodetect
/dev/sda2             488         731     1959930   82  Linux swap / Solaris
/dev/sda3             732      121601   970888275   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         487     3911796   fd  Linux raid autodetect
/dev/sdb2             488         731     1959930   82  Linux swap / Solaris
/dev/sdb3             732      121601   970888275   fd  Linux raid autodetect

Disk /dev/md1: 4005 MB, 4005560320 bytes
2 heads, 4 sectors/track, 977920 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
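The fdisk output lists /dev/sda and /dev/sdb separately even though they are mirrored. If you want to confirm how the md devices map onto the physical drives, the kernel exposes the software RAID state in /proc/mdstat – a quick sanity check, assuming Linux software RAID as on this server:

```shell
# Show the state of all Linux software RAID (md) arrays,
# including which partitions belong to each mirror and any resync progress.
cat /proc/mdstat
```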
Type the pvs command and press Enter.
pvs > Physical Volume Show command.
PV > Physical Volume path
VG > Volume Group name.
Fmt > LVM Format
Attr > Physical volume attributes. The a attribute means that the physical volume is allocatable and not read-only.
PSize > Physical Size of the physical volume.
PFree > Physical Free space left on the physical volume.
# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md3   vg00 lvm2 a-   925.91G 913.91G
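The pvs output shows roughly 913GB still unallocated. A companion command, vgs, summarizes the same information per volume group; checking the VFree column before extending avoids asking lvextend for more space than the group actually has. A quick sketch, assuming the same vg00 group name:

```shell
# Summarise the volume group: number of PVs/LVs, total size and free space.
# VFree must be at least the amount you intend to add with lvextend.
vgs vg00
```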
Since the logical volume assigned to /var is only 4GB, I will be increasing this to 10GB using the lvextend command. The command below is to be used as reference only as the parameters will be different depending on your scenario.
lvextend > This is the logical volume extend command used to make a logical volume larger.
-L +6G > The Logical volume size option specifies how much larger to make the volume. In this scenario, 6 gigabytes are added to the current 4 gigabyte volume, resulting in a 10 gigabyte volume.
/dev/mapper/vg00-var > The path to the logical volume is specified last. The path to the volume being extended was taken from the df -h output in the first step of this guide.
# lvextend -L +6G /dev/mapper/vg00-var
  Extending logical volume var to 10.00 GB
  Logical volume var successfully resized
Type df -h to display the disk free space once again. The lvextend operation finished successfully in the last step, however /dev/mapper/vg00-var is still showing only 4.0G. This is because, while the logical volume was increased successfully, the file system must also be extended to take advantage of the full space of the logical volume.
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              3.7G  673M  3.1G  18% /
/dev/mapper/vg00-usr  4.0G  1.3G  2.7G  33% /usr
/dev/mapper/vg00-var  4.0G  3.6G  461M  89% /var
/dev/mapper/vg00-home 4.0G  4.2M  4.0G   1% /home
none                  2.0G   10M  2.0G   1% /tmp
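Incidentally, on recent lvm2 versions the two operations can be combined: lvextend accepts an -r (--resizefs) flag that calls the appropriate filesystem-grow tool for you via fsadm. A possible one-step alternative, assuming your lvm2 build supports the flag:

```shell
# Grow the logical volume by 6G and resize the file system in one step.
# -r / --resizefs invokes fsadm, which runs xfs_growfs for XFS volumes.
lvextend -r -L +6G /dev/mapper/vg00-var
```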
Type lvs to show the logical volume information once again. Here we can confirm that the logical volume has successfully been extended to 10 gigabytes. In the next steps, we will increase the file system to match the logical volume size.
# lvs
  LV   VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  home vg00 -wi-ao  4.00G
  usr  vg00 -wi-ao  4.00G
  var  vg00 -wi-ao 10.00G
Type mount to display the mounted file systems. From the output, we find that the /dev/mapper/vg00-var logical volume is formatted as xfs.
# mount
/dev/md1 on / type ext3 (rw)
none on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/vg00-usr on /usr type xfs (rw)
/dev/mapper/vg00-var on /var type xfs (rw,usrquota)
/dev/mapper/vg00-home on /home type xfs (rw,usrquota)
none on /tmp type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
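If you prefer not to scan the mount output by eye, df can report the filesystem type directly with its -T option – a quick alternative check:

```shell
# Print the filesystem type (Type column) alongside the usual usage figures.
df -T /var
```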
To increase the file system to match the size of the logical volume, we will use the xfs_growfs command.
xfs_growfs /var > This will extend the XFS file system mounted on /var to the 10 gigabyte limit of the logical volume.
# xfs_growfs /var
meta-data=/dev/vg00/var          isize=256    agcount=4, agsize=262144 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1048576, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1048576 to 2621440
Finally, type df -h to display the disk free space and confirm that the file system has been extended.
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              3.7G  673M  3.1G  18% /
/dev/mapper/vg00-usr  4.0G  1.3G  2.7G  33% /usr
/dev/mapper/vg00-var   10G  3.6G  6.5G  36% /var
/dev/mapper/vg00-home 4.0G  4.2M  4.0G   1% /home
none                  2.0G   10M  2.0G   1% /tmp
Hopefully this will buy me a bit more time (and space!)…